00:00:00.000 Started by upstream project "autotest-per-patch" build number 127103 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.080 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.080 The recommended git tool is: git 00:00:00.080 using credential 00000000-0000-0000-0000-000000000002 00:00:00.082 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.130 Fetching changes from the remote Git repository 00:00:00.132 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.187 Using shallow fetch with depth 1 00:00:00.187 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.187 > git --version # timeout=10 00:00:00.238 > git --version # 'git version 2.39.2' 00:00:00.238 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.271 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.271 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.698 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.710 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.721 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:06.721 > git config core.sparsecheckout # timeout=10 00:00:06.733 > git read-tree -mu HEAD # timeout=10 00:00:06.748 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:06.807 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:06.807 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:06.952 [Pipeline] Start of Pipeline 00:00:06.968 [Pipeline] library 00:00:06.971 Loading library shm_lib@master 00:00:06.971 Library shm_lib@master is cached. Copying from home. 00:00:06.984 [Pipeline] node 00:00:07.002 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:07.004 [Pipeline] { 00:00:07.012 [Pipeline] catchError 00:00:07.013 [Pipeline] { 00:00:07.034 [Pipeline] wrap 00:00:07.041 [Pipeline] { 00:00:07.048 [Pipeline] stage 00:00:07.049 [Pipeline] { (Prologue) 00:00:07.062 [Pipeline] echo 00:00:07.064 Node: VM-host-SM17 00:00:07.068 [Pipeline] cleanWs 00:00:07.076 [WS-CLEANUP] Deleting project workspace... 00:00:07.076 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.082 [WS-CLEANUP] done 00:00:07.252 [Pipeline] setCustomBuildProperty 00:00:07.321 [Pipeline] httpRequest 00:00:07.347 [Pipeline] echo 00:00:07.348 Sorcerer 10.211.164.101 is alive 00:00:07.353 [Pipeline] httpRequest 00:00:07.356 HttpMethod: GET 00:00:07.357 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:07.358 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:07.364 Response Code: HTTP/1.1 200 OK 00:00:07.365 Success: Status code 200 is in the accepted range: 200,404 00:00:07.366 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:09.708 [Pipeline] sh 00:00:09.987 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:10.004 [Pipeline] httpRequest 00:00:10.032 [Pipeline] echo 00:00:10.034 Sorcerer 10.211.164.101 is alive 00:00:10.043 [Pipeline] httpRequest 00:00:10.047 HttpMethod: GET 00:00:10.048 URL: http://10.211.164.101/packages/spdk_0c322284fc8cbedc534a5a5ba162764d1e9319da.tar.gz 00:00:10.048 Sending request to url: http://10.211.164.101/packages/spdk_0c322284fc8cbedc534a5a5ba162764d1e9319da.tar.gz 00:00:10.065 Response Code: HTTP/1.1 200 OK 00:00:10.065 Success: Status code 200 is in the accepted range: 200,404 00:00:10.066 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_0c322284fc8cbedc534a5a5ba162764d1e9319da.tar.gz 00:01:32.948 [Pipeline] sh 00:01:33.228 + tar --no-same-owner -xf spdk_0c322284fc8cbedc534a5a5ba162764d1e9319da.tar.gz 00:01:36.524 [Pipeline] sh 00:01:36.803 + git -C spdk log --oneline -n5 00:01:36.803 0c322284f scripts/nvmf_perf: move SPDK target specific parameters 00:01:36.803 33352e0b6 scripts/perf_nvmf: move sys_config to Server class 00:01:36.803 0ce8280fe scripts/nvmf_perf: remove bdev information from output 00:01:36.803 e0435b1e7 scripts/nvmf_perf: add server factory function 00:01:36.804 920322689 scripts/nvmf_perf: set initiator num_cores earlier 00:01:36.822 [Pipeline] writeFile 00:01:36.838 [Pipeline] sh 00:01:37.116 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:37.126 [Pipeline] sh 00:01:37.402 + cat autorun-spdk.conf 00:01:37.402 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.402 SPDK_TEST_NVMF=1 00:01:37.402 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:37.402 SPDK_TEST_URING=1 00:01:37.402 SPDK_TEST_USDT=1 00:01:37.402 SPDK_RUN_UBSAN=1 00:01:37.402 NET_TYPE=virt 00:01:37.402 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:37.409 RUN_NIGHTLY=0 00:01:37.411 [Pipeline] } 00:01:37.428 [Pipeline] // stage 00:01:37.442 [Pipeline] stage 00:01:37.444 [Pipeline] { (Run VM) 00:01:37.458 [Pipeline] sh 00:01:37.742 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:37.742 + echo 'Start stage prepare_nvme.sh' 00:01:37.742 Start stage prepare_nvme.sh 00:01:37.742 + [[ -n 6 ]] 00:01:37.742 + disk_prefix=ex6 00:01:37.742 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:37.742 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:37.742 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:37.742 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.742 ++ SPDK_TEST_NVMF=1 00:01:37.742 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:37.742 ++ SPDK_TEST_URING=1 00:01:37.742 ++ SPDK_TEST_USDT=1 00:01:37.742 ++ SPDK_RUN_UBSAN=1 00:01:37.742 ++ NET_TYPE=virt 00:01:37.742 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:37.742 ++ RUN_NIGHTLY=0 
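The autorun-spdk.conf generated above is plain KEY=value shell syntax; the test scripts consume it by sourcing it, which is exactly what the "++" xtrace lines show. As a minimal local sketch (assuming the file sits in the current directory; the echo line is illustrative only and not part of any CI script):

    #!/usr/bin/env bash
    # Sketch: load the generated CI config the same way the test scripts do
    # (a plain "source"), then print a few of the switches that drive this job.
    set -euo pipefail
    source ./autorun-spdk.conf
    echo "functional=${SPDK_RUN_FUNCTIONAL_TEST} nvmf=${SPDK_TEST_NVMF} transport=${SPDK_TEST_NVMF_TRANSPORT} uring=${SPDK_TEST_URING} ubsan=${SPDK_RUN_UBSAN}"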
00:01:37.742 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:37.742 + nvme_files=() 00:01:37.742 + declare -A nvme_files 00:01:37.742 + backend_dir=/var/lib/libvirt/images/backends 00:01:37.742 + nvme_files['nvme.img']=5G 00:01:37.742 + nvme_files['nvme-cmb.img']=5G 00:01:37.742 + nvme_files['nvme-multi0.img']=4G 00:01:37.742 + nvme_files['nvme-multi1.img']=4G 00:01:37.742 + nvme_files['nvme-multi2.img']=4G 00:01:37.742 + nvme_files['nvme-openstack.img']=8G 00:01:37.742 + nvme_files['nvme-zns.img']=5G 00:01:37.742 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:37.742 + (( SPDK_TEST_FTL == 1 )) 00:01:37.742 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:37.742 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:37.742 + for nvme in "${!nvme_files[@]}" 00:01:37.742 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:01:37.742 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:37.742 + for nvme in "${!nvme_files[@]}" 00:01:37.742 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:01:37.742 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:37.742 + for nvme in "${!nvme_files[@]}" 00:01:37.742 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:01:37.742 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:37.742 + for nvme in "${!nvme_files[@]}" 00:01:37.742 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:01:37.742 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:37.742 + for nvme in "${!nvme_files[@]}" 00:01:37.742 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:01:37.742 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:37.742 + for nvme in "${!nvme_files[@]}" 00:01:37.742 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:01:37.742 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:37.742 + for nvme in "${!nvme_files[@]}" 00:01:37.742 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:01:37.742 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:37.742 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:01:37.742 + echo 'End stage prepare_nvme.sh' 00:01:37.742 End stage prepare_nvme.sh 00:01:37.804 [Pipeline] sh 00:01:38.093 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:38.093 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora38 00:01:38.093 00:01:38.093 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 
00:01:38.093 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:38.093 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:38.093 HELP=0 00:01:38.093 DRY_RUN=0 00:01:38.093 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:01:38.093 NVME_DISKS_TYPE=nvme,nvme, 00:01:38.093 NVME_AUTO_CREATE=0 00:01:38.093 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:01:38.093 NVME_CMB=,, 00:01:38.093 NVME_PMR=,, 00:01:38.093 NVME_ZNS=,, 00:01:38.093 NVME_MS=,, 00:01:38.093 NVME_FDP=,, 00:01:38.093 SPDK_VAGRANT_DISTRO=fedora38 00:01:38.093 SPDK_VAGRANT_VMCPU=10 00:01:38.093 SPDK_VAGRANT_VMRAM=12288 00:01:38.093 SPDK_VAGRANT_PROVIDER=libvirt 00:01:38.093 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:38.093 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:38.093 SPDK_OPENSTACK_NETWORK=0 00:01:38.093 VAGRANT_PACKAGE_BOX=0 00:01:38.093 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:38.093 FORCE_DISTRO=true 00:01:38.093 VAGRANT_BOX_VERSION= 00:01:38.093 EXTRA_VAGRANTFILES= 00:01:38.093 NIC_MODEL=e1000 00:01:38.093 00:01:38.093 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:01:38.093 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:41.378 Bringing machine 'default' up with 'libvirt' provider... 00:01:41.637 ==> default: Creating image (snapshot of base box volume). 00:01:41.637 ==> default: Creating domain with the following settings... 00:01:41.637 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721850009_01565fe3416e076af4d7 00:01:41.637 ==> default: -- Domain type: kvm 00:01:41.637 ==> default: -- Cpus: 10 00:01:41.637 ==> default: -- Feature: acpi 00:01:41.637 ==> default: -- Feature: apic 00:01:41.637 ==> default: -- Feature: pae 00:01:41.637 ==> default: -- Memory: 12288M 00:01:41.637 ==> default: -- Memory Backing: hugepages: 00:01:41.637 ==> default: -- Management MAC: 00:01:41.637 ==> default: -- Loader: 00:01:41.637 ==> default: -- Nvram: 00:01:41.637 ==> default: -- Base box: spdk/fedora38 00:01:41.637 ==> default: -- Storage pool: default 00:01:41.637 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721850009_01565fe3416e076af4d7.img (20G) 00:01:41.637 ==> default: -- Volume Cache: default 00:01:41.637 ==> default: -- Kernel: 00:01:41.637 ==> default: -- Initrd: 00:01:41.637 ==> default: -- Graphics Type: vnc 00:01:41.637 ==> default: -- Graphics Port: -1 00:01:41.637 ==> default: -- Graphics IP: 127.0.0.1 00:01:41.637 ==> default: -- Graphics Password: Not defined 00:01:41.637 ==> default: -- Video Type: cirrus 00:01:41.637 ==> default: -- Video VRAM: 9216 00:01:41.637 ==> default: -- Sound Type: 00:01:41.637 ==> default: -- Keymap: en-us 00:01:41.637 ==> default: -- TPM Path: 00:01:41.637 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:41.637 ==> default: -- Command line args: 00:01:41.637 ==> default: -> value=-device, 00:01:41.637 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:41.637 ==> default: -> value=-drive, 00:01:41.637 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:01:41.637 ==> default: -> value=-device, 
00:01:41.637 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:41.637 ==> default: -> value=-device, 00:01:41.637 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:41.637 ==> default: -> value=-drive, 00:01:41.637 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:41.637 ==> default: -> value=-device, 00:01:41.637 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:41.637 ==> default: -> value=-drive, 00:01:41.637 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:41.637 ==> default: -> value=-device, 00:01:41.637 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:41.637 ==> default: -> value=-drive, 00:01:41.637 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:41.637 ==> default: -> value=-device, 00:01:41.637 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:41.896 ==> default: Creating shared folders metadata... 00:01:41.896 ==> default: Starting domain. 00:01:43.272 ==> default: Waiting for domain to get an IP address... 00:02:01.352 ==> default: Waiting for SSH to become available... 00:02:01.352 ==> default: Configuring and enabling network interfaces... 00:02:03.884 default: SSH address: 192.168.121.210:22 00:02:03.884 default: SSH username: vagrant 00:02:03.884 default: SSH auth method: private key 00:02:06.412 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:14.524 ==> default: Mounting SSHFS shared folder... 00:02:15.090 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:15.090 ==> default: Checking Mount.. 00:02:16.460 ==> default: Folder Successfully Mounted! 00:02:16.460 ==> default: Running provisioner: file... 00:02:17.396 default: ~/.gitconfig => .gitconfig 00:02:17.653 00:02:17.653 SUCCESS! 00:02:17.653 00:02:17.653 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:02:17.653 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:17.653 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
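The stage above prepares seven raw backing files under /var/lib/libvirt/images/backends and boots a Fedora 38 libvirt guest whose QEMU command line defines two NVMe controllers: nvme-0 (serial 12340) with a single namespace on ex6-nvme.img, and nvme-1 (serial 12341) with three namespaces on ex6-nvme-multi0/1/2.img. The "Formatting ..." lines match qemu-img output, so an equivalent manual sketch (paths reused from the log; these commands are illustrative, not what create_nvme_img.sh literally runs) would be:

    # Sketch only: create one 5G raw, falloc-preallocated backing file by hand ...
    sudo qemu-img create -f raw -o preallocation=falloc \
        /var/lib/libvirt/images/backends/ex6-nvme.img 5G
    # ... and, inside the guest, confirm the expected namespace layout
    # (one controller with a single namespace, one with three):
    ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3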
00:02:17.653 00:02:17.662 [Pipeline] } 00:02:17.676 [Pipeline] // stage 00:02:17.682 [Pipeline] dir 00:02:17.682 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:02:17.684 [Pipeline] { 00:02:17.694 [Pipeline] catchError 00:02:17.695 [Pipeline] { 00:02:17.703 [Pipeline] sh 00:02:17.990 + vagrant ssh-config --host vagrant 00:02:17.990 + sed -ne /^Host/,$p 00:02:17.990 + tee ssh_conf 00:02:21.316 Host vagrant 00:02:21.316 HostName 192.168.121.210 00:02:21.316 User vagrant 00:02:21.316 Port 22 00:02:21.316 UserKnownHostsFile /dev/null 00:02:21.316 StrictHostKeyChecking no 00:02:21.316 PasswordAuthentication no 00:02:21.316 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:21.316 IdentitiesOnly yes 00:02:21.316 LogLevel FATAL 00:02:21.316 ForwardAgent yes 00:02:21.316 ForwardX11 yes 00:02:21.316 00:02:21.335 [Pipeline] withEnv 00:02:21.337 [Pipeline] { 00:02:21.349 [Pipeline] sh 00:02:21.626 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:21.626 source /etc/os-release 00:02:21.626 [[ -e /image.version ]] && img=$(< /image.version) 00:02:21.626 # Minimal, systemd-like check. 00:02:21.626 if [[ -e /.dockerenv ]]; then 00:02:21.626 # Clear garbage from the node's name: 00:02:21.626 # agt-er_autotest_547-896 -> autotest_547-896 00:02:21.626 # $HOSTNAME is the actual container id 00:02:21.626 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:21.626 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:21.626 # We can assume this is a mount from a host where container is running, 00:02:21.626 # so fetch its hostname to easily identify the target swarm worker. 00:02:21.626 container="$(< /etc/hostname) ($agent)" 00:02:21.626 else 00:02:21.626 # Fallback 00:02:21.626 container=$agent 00:02:21.626 fi 00:02:21.626 fi 00:02:21.626 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:21.626 00:02:21.891 [Pipeline] } 00:02:21.902 [Pipeline] // withEnv 00:02:21.907 [Pipeline] setCustomBuildProperty 00:02:21.916 [Pipeline] stage 00:02:21.918 [Pipeline] { (Tests) 00:02:21.932 [Pipeline] sh 00:02:22.205 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:22.473 [Pipeline] sh 00:02:22.753 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:22.766 [Pipeline] timeout 00:02:22.766 Timeout set to expire in 30 min 00:02:22.767 [Pipeline] { 00:02:22.780 [Pipeline] sh 00:02:23.058 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:23.625 HEAD is now at 0c322284f scripts/nvmf_perf: move SPDK target specific parameters 00:02:23.894 [Pipeline] sh 00:02:24.173 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:24.442 [Pipeline] sh 00:02:24.718 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:24.733 [Pipeline] sh 00:02:25.032 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:25.032 ++ readlink -f spdk_repo 00:02:25.032 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:25.032 + [[ -n /home/vagrant/spdk_repo ]] 00:02:25.032 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:25.032 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 
00:02:25.032 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:25.032 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:02:25.032 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:25.032 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:25.032 + cd /home/vagrant/spdk_repo 00:02:25.032 + source /etc/os-release 00:02:25.032 ++ NAME='Fedora Linux' 00:02:25.032 ++ VERSION='38 (Cloud Edition)' 00:02:25.032 ++ ID=fedora 00:02:25.032 ++ VERSION_ID=38 00:02:25.032 ++ VERSION_CODENAME= 00:02:25.032 ++ PLATFORM_ID=platform:f38 00:02:25.032 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:25.032 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:25.032 ++ LOGO=fedora-logo-icon 00:02:25.032 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:25.032 ++ HOME_URL=https://fedoraproject.org/ 00:02:25.032 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:25.032 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:25.032 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:25.032 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:25.032 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:25.032 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:25.032 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:25.032 ++ SUPPORT_END=2024-05-14 00:02:25.032 ++ VARIANT='Cloud Edition' 00:02:25.032 ++ VARIANT_ID=cloud 00:02:25.032 + uname -a 00:02:25.032 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:25.032 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:25.600 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:25.600 Hugepages 00:02:25.600 node hugesize free / total 00:02:25.600 node0 1048576kB 0 / 0 00:02:25.600 node0 2048kB 0 / 0 00:02:25.600 00:02:25.600 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:25.600 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:25.600 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:25.600 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:25.600 + rm -f /tmp/spdk-ld-path 00:02:25.600 + source autorun-spdk.conf 00:02:25.600 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:25.600 ++ SPDK_TEST_NVMF=1 00:02:25.600 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:25.600 ++ SPDK_TEST_URING=1 00:02:25.600 ++ SPDK_TEST_USDT=1 00:02:25.600 ++ SPDK_RUN_UBSAN=1 00:02:25.600 ++ NET_TYPE=virt 00:02:25.600 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:25.600 ++ RUN_NIGHTLY=0 00:02:25.600 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:25.600 + [[ -n '' ]] 00:02:25.600 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:25.600 + for M in /var/spdk/build-*-manifest.txt 00:02:25.600 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:25.600 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:25.600 + for M in /var/spdk/build-*-manifest.txt 00:02:25.600 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:25.600 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:25.600 ++ uname 00:02:25.600 + [[ Linux == \L\i\n\u\x ]] 00:02:25.600 + sudo dmesg -T 00:02:25.861 + sudo dmesg --clear 00:02:25.861 + dmesg_pid=5094 00:02:25.861 + sudo dmesg -Tw 00:02:25.861 + [[ Fedora Linux == FreeBSD ]] 00:02:25.861 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:25.861 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:25.861 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:25.861 + [[ -x 
/usr/src/fio-static/fio ]] 00:02:25.861 + export FIO_BIN=/usr/src/fio-static/fio 00:02:25.861 + FIO_BIN=/usr/src/fio-static/fio 00:02:25.861 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:25.861 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:25.861 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:25.861 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:25.861 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:25.861 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:25.861 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:25.861 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:25.861 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:25.861 Test configuration: 00:02:25.861 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:25.861 SPDK_TEST_NVMF=1 00:02:25.861 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:25.861 SPDK_TEST_URING=1 00:02:25.861 SPDK_TEST_USDT=1 00:02:25.861 SPDK_RUN_UBSAN=1 00:02:25.861 NET_TYPE=virt 00:02:25.861 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:25.861 RUN_NIGHTLY=0 19:40:54 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:25.861 19:40:54 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:25.861 19:40:54 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:25.861 19:40:54 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:25.861 19:40:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.861 19:40:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.861 19:40:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.861 19:40:54 -- paths/export.sh@5 -- $ export PATH 00:02:25.861 19:40:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.861 19:40:54 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:25.861 19:40:54 -- common/autobuild_common.sh@447 -- $ date +%s 00:02:25.861 19:40:54 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721850054.XXXXXX 00:02:25.861 19:40:54 -- common/autobuild_common.sh@447 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1721850054.4URRVx 00:02:25.861 19:40:54 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:02:25.861 19:40:54 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:02:25.861 19:40:54 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:25.861 19:40:54 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:25.861 19:40:54 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:25.861 19:40:54 -- common/autobuild_common.sh@463 -- $ get_config_params 00:02:25.861 19:40:54 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:02:25.861 19:40:54 -- common/autotest_common.sh@10 -- $ set +x 00:02:25.861 19:40:54 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:02:25.861 19:40:54 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:02:25.861 19:40:54 -- pm/common@17 -- $ local monitor 00:02:25.861 19:40:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.861 19:40:54 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.861 19:40:54 -- pm/common@25 -- $ sleep 1 00:02:25.861 19:40:54 -- pm/common@21 -- $ date +%s 00:02:25.861 19:40:54 -- pm/common@21 -- $ date +%s 00:02:25.861 19:40:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721850054 00:02:25.861 19:40:54 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721850054 00:02:25.861 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721850054_collect-vmstat.pm.log 00:02:25.861 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721850054_collect-cpu-load.pm.log 00:02:26.793 19:40:55 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:02:26.793 19:40:55 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:26.793 19:40:55 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:26.793 19:40:55 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:26.793 19:40:55 -- spdk/autobuild.sh@16 -- $ date -u 00:02:26.793 Wed Jul 24 07:40:55 PM UTC 2024 00:02:26.793 19:40:55 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:27.050 v24.09-pre-317-g0c322284f 00:02:27.050 19:40:55 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:27.050 19:40:55 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:27.050 19:40:55 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:27.050 19:40:55 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:27.050 19:40:55 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:27.050 19:40:55 -- common/autotest_common.sh@10 -- $ set +x 00:02:27.050 ************************************ 00:02:27.050 START TEST ubsan 00:02:27.050 ************************************ 00:02:27.050 using ubsan 00:02:27.050 19:40:55 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:27.050 00:02:27.050 real 0m0.000s 
00:02:27.050 user 0m0.000s 00:02:27.050 sys 0m0.000s 00:02:27.050 19:40:55 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:27.050 19:40:55 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:27.050 ************************************ 00:02:27.050 END TEST ubsan 00:02:27.050 ************************************ 00:02:27.050 19:40:55 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:27.050 19:40:55 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:27.050 19:40:55 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:27.050 19:40:55 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:27.050 19:40:55 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:27.050 19:40:55 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:27.050 19:40:55 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:27.050 19:40:55 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:27.050 19:40:55 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:02:27.050 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:27.050 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:27.614 Using 'verbs' RDMA provider 00:02:40.747 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:55.616 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:55.616 Creating mk/config.mk...done. 00:02:55.616 Creating mk/cc.flags.mk...done. 00:02:55.616 Type 'make' to build. 00:02:55.616 19:41:22 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:55.616 19:41:22 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:55.616 19:41:22 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:55.616 19:41:22 -- common/autotest_common.sh@10 -- $ set +x 00:02:55.616 ************************************ 00:02:55.616 START TEST make 00:02:55.616 ************************************ 00:02:55.616 19:41:22 make -- common/autotest_common.sh@1125 -- $ make -j10 00:02:55.616 make[1]: Nothing to be done for 'all'. 
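At this point autobuild has configured SPDK with debug, werror, UBSan, USDT, uring and shared-library support and started a 10-way parallel make; everything that follows is the bundled DPDK submodule being built first. A rough local reproduction of the same build step (assuming an SPDK checkout with submodules and the pkgdep prerequisites installed; flags copied from the config_params line above and trimmed to the ones relevant to this job):

    # Sketch only: mirror the build configuration used by this job.
    cd /home/vagrant/spdk_repo/spdk          # checkout path as used in the log
    ./configure --enable-debug --enable-werror --enable-ubsan \
                --with-uring --with-usdt --with-shared
    make -j"$(nproc)"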
00:03:05.604 The Meson build system 00:03:05.604 Version: 1.3.1 00:03:05.604 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:05.604 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:05.604 Build type: native build 00:03:05.604 Program cat found: YES (/usr/bin/cat) 00:03:05.604 Project name: DPDK 00:03:05.604 Project version: 24.03.0 00:03:05.604 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:05.604 C linker for the host machine: cc ld.bfd 2.39-16 00:03:05.604 Host machine cpu family: x86_64 00:03:05.604 Host machine cpu: x86_64 00:03:05.604 Message: ## Building in Developer Mode ## 00:03:05.605 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:05.605 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:05.605 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:05.605 Program python3 found: YES (/usr/bin/python3) 00:03:05.605 Program cat found: YES (/usr/bin/cat) 00:03:05.605 Compiler for C supports arguments -march=native: YES 00:03:05.605 Checking for size of "void *" : 8 00:03:05.605 Checking for size of "void *" : 8 (cached) 00:03:05.605 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:03:05.605 Library m found: YES 00:03:05.605 Library numa found: YES 00:03:05.605 Has header "numaif.h" : YES 00:03:05.605 Library fdt found: NO 00:03:05.605 Library execinfo found: NO 00:03:05.605 Has header "execinfo.h" : YES 00:03:05.605 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:05.605 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:05.605 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:05.605 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:05.605 Run-time dependency openssl found: YES 3.0.9 00:03:05.605 Run-time dependency libpcap found: YES 1.10.4 00:03:05.605 Has header "pcap.h" with dependency libpcap: YES 00:03:05.605 Compiler for C supports arguments -Wcast-qual: YES 00:03:05.605 Compiler for C supports arguments -Wdeprecated: YES 00:03:05.605 Compiler for C supports arguments -Wformat: YES 00:03:05.605 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:05.605 Compiler for C supports arguments -Wformat-security: NO 00:03:05.605 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:05.605 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:05.605 Compiler for C supports arguments -Wnested-externs: YES 00:03:05.605 Compiler for C supports arguments -Wold-style-definition: YES 00:03:05.605 Compiler for C supports arguments -Wpointer-arith: YES 00:03:05.605 Compiler for C supports arguments -Wsign-compare: YES 00:03:05.605 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:05.605 Compiler for C supports arguments -Wundef: YES 00:03:05.605 Compiler for C supports arguments -Wwrite-strings: YES 00:03:05.605 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:05.605 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:05.605 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:05.605 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:05.605 Program objdump found: YES (/usr/bin/objdump) 00:03:05.605 Compiler for C supports arguments -mavx512f: YES 00:03:05.605 Checking if "AVX512 checking" compiles: YES 00:03:05.605 Fetching value of define "__SSE4_2__" : 1 00:03:05.605 Fetching value of define 
"__AES__" : 1 00:03:05.605 Fetching value of define "__AVX__" : 1 00:03:05.605 Fetching value of define "__AVX2__" : 1 00:03:05.605 Fetching value of define "__AVX512BW__" : (undefined) 00:03:05.605 Fetching value of define "__AVX512CD__" : (undefined) 00:03:05.605 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:05.605 Fetching value of define "__AVX512F__" : (undefined) 00:03:05.605 Fetching value of define "__AVX512VL__" : (undefined) 00:03:05.605 Fetching value of define "__PCLMUL__" : 1 00:03:05.605 Fetching value of define "__RDRND__" : 1 00:03:05.605 Fetching value of define "__RDSEED__" : 1 00:03:05.605 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:05.605 Fetching value of define "__znver1__" : (undefined) 00:03:05.605 Fetching value of define "__znver2__" : (undefined) 00:03:05.605 Fetching value of define "__znver3__" : (undefined) 00:03:05.605 Fetching value of define "__znver4__" : (undefined) 00:03:05.605 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:05.605 Message: lib/log: Defining dependency "log" 00:03:05.605 Message: lib/kvargs: Defining dependency "kvargs" 00:03:05.605 Message: lib/telemetry: Defining dependency "telemetry" 00:03:05.605 Checking for function "getentropy" : NO 00:03:05.605 Message: lib/eal: Defining dependency "eal" 00:03:05.605 Message: lib/ring: Defining dependency "ring" 00:03:05.605 Message: lib/rcu: Defining dependency "rcu" 00:03:05.605 Message: lib/mempool: Defining dependency "mempool" 00:03:05.605 Message: lib/mbuf: Defining dependency "mbuf" 00:03:05.605 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:05.605 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:05.605 Compiler for C supports arguments -mpclmul: YES 00:03:05.605 Compiler for C supports arguments -maes: YES 00:03:05.605 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:05.605 Compiler for C supports arguments -mavx512bw: YES 00:03:05.605 Compiler for C supports arguments -mavx512dq: YES 00:03:05.605 Compiler for C supports arguments -mavx512vl: YES 00:03:05.605 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:05.605 Compiler for C supports arguments -mavx2: YES 00:03:05.605 Compiler for C supports arguments -mavx: YES 00:03:05.605 Message: lib/net: Defining dependency "net" 00:03:05.605 Message: lib/meter: Defining dependency "meter" 00:03:05.605 Message: lib/ethdev: Defining dependency "ethdev" 00:03:05.605 Message: lib/pci: Defining dependency "pci" 00:03:05.605 Message: lib/cmdline: Defining dependency "cmdline" 00:03:05.605 Message: lib/hash: Defining dependency "hash" 00:03:05.605 Message: lib/timer: Defining dependency "timer" 00:03:05.605 Message: lib/compressdev: Defining dependency "compressdev" 00:03:05.605 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:05.605 Message: lib/dmadev: Defining dependency "dmadev" 00:03:05.605 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:05.605 Message: lib/power: Defining dependency "power" 00:03:05.605 Message: lib/reorder: Defining dependency "reorder" 00:03:05.605 Message: lib/security: Defining dependency "security" 00:03:05.605 Has header "linux/userfaultfd.h" : YES 00:03:05.605 Has header "linux/vduse.h" : YES 00:03:05.605 Message: lib/vhost: Defining dependency "vhost" 00:03:05.605 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:05.605 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:05.605 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:05.605 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:05.605 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:05.605 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:05.605 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:05.605 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:05.605 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:05.605 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:05.605 Program doxygen found: YES (/usr/bin/doxygen) 00:03:05.605 Configuring doxy-api-html.conf using configuration 00:03:05.605 Configuring doxy-api-man.conf using configuration 00:03:05.605 Program mandb found: YES (/usr/bin/mandb) 00:03:05.605 Program sphinx-build found: NO 00:03:05.605 Configuring rte_build_config.h using configuration 00:03:05.605 Message: 00:03:05.605 ================= 00:03:05.605 Applications Enabled 00:03:05.605 ================= 00:03:05.605 00:03:05.605 apps: 00:03:05.605 00:03:05.605 00:03:05.605 Message: 00:03:05.605 ================= 00:03:05.605 Libraries Enabled 00:03:05.605 ================= 00:03:05.605 00:03:05.605 libs: 00:03:05.605 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:05.605 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:05.605 cryptodev, dmadev, power, reorder, security, vhost, 00:03:05.605 00:03:05.605 Message: 00:03:05.605 =============== 00:03:05.605 Drivers Enabled 00:03:05.605 =============== 00:03:05.605 00:03:05.605 common: 00:03:05.605 00:03:05.605 bus: 00:03:05.605 pci, vdev, 00:03:05.605 mempool: 00:03:05.605 ring, 00:03:05.605 dma: 00:03:05.605 00:03:05.605 net: 00:03:05.605 00:03:05.605 crypto: 00:03:05.605 00:03:05.605 compress: 00:03:05.605 00:03:05.605 vdpa: 00:03:05.605 00:03:05.605 00:03:05.605 Message: 00:03:05.605 ================= 00:03:05.605 Content Skipped 00:03:05.605 ================= 00:03:05.605 00:03:05.605 apps: 00:03:05.605 dumpcap: explicitly disabled via build config 00:03:05.605 graph: explicitly disabled via build config 00:03:05.605 pdump: explicitly disabled via build config 00:03:05.605 proc-info: explicitly disabled via build config 00:03:05.605 test-acl: explicitly disabled via build config 00:03:05.605 test-bbdev: explicitly disabled via build config 00:03:05.605 test-cmdline: explicitly disabled via build config 00:03:05.605 test-compress-perf: explicitly disabled via build config 00:03:05.605 test-crypto-perf: explicitly disabled via build config 00:03:05.605 test-dma-perf: explicitly disabled via build config 00:03:05.605 test-eventdev: explicitly disabled via build config 00:03:05.605 test-fib: explicitly disabled via build config 00:03:05.605 test-flow-perf: explicitly disabled via build config 00:03:05.605 test-gpudev: explicitly disabled via build config 00:03:05.605 test-mldev: explicitly disabled via build config 00:03:05.605 test-pipeline: explicitly disabled via build config 00:03:05.605 test-pmd: explicitly disabled via build config 00:03:05.605 test-regex: explicitly disabled via build config 00:03:05.605 test-sad: explicitly disabled via build config 00:03:05.605 test-security-perf: explicitly disabled via build config 00:03:05.605 00:03:05.605 libs: 00:03:05.605 argparse: explicitly disabled via build config 00:03:05.605 metrics: explicitly disabled via build config 00:03:05.605 acl: explicitly disabled via build config 00:03:05.605 bbdev: explicitly disabled via build config 00:03:05.605 
bitratestats: explicitly disabled via build config 00:03:05.605 bpf: explicitly disabled via build config 00:03:05.605 cfgfile: explicitly disabled via build config 00:03:05.605 distributor: explicitly disabled via build config 00:03:05.605 efd: explicitly disabled via build config 00:03:05.605 eventdev: explicitly disabled via build config 00:03:05.605 dispatcher: explicitly disabled via build config 00:03:05.606 gpudev: explicitly disabled via build config 00:03:05.606 gro: explicitly disabled via build config 00:03:05.606 gso: explicitly disabled via build config 00:03:05.606 ip_frag: explicitly disabled via build config 00:03:05.606 jobstats: explicitly disabled via build config 00:03:05.606 latencystats: explicitly disabled via build config 00:03:05.606 lpm: explicitly disabled via build config 00:03:05.606 member: explicitly disabled via build config 00:03:05.606 pcapng: explicitly disabled via build config 00:03:05.606 rawdev: explicitly disabled via build config 00:03:05.606 regexdev: explicitly disabled via build config 00:03:05.606 mldev: explicitly disabled via build config 00:03:05.606 rib: explicitly disabled via build config 00:03:05.606 sched: explicitly disabled via build config 00:03:05.606 stack: explicitly disabled via build config 00:03:05.606 ipsec: explicitly disabled via build config 00:03:05.606 pdcp: explicitly disabled via build config 00:03:05.606 fib: explicitly disabled via build config 00:03:05.606 port: explicitly disabled via build config 00:03:05.606 pdump: explicitly disabled via build config 00:03:05.606 table: explicitly disabled via build config 00:03:05.606 pipeline: explicitly disabled via build config 00:03:05.606 graph: explicitly disabled via build config 00:03:05.606 node: explicitly disabled via build config 00:03:05.606 00:03:05.606 drivers: 00:03:05.606 common/cpt: not in enabled drivers build config 00:03:05.606 common/dpaax: not in enabled drivers build config 00:03:05.606 common/iavf: not in enabled drivers build config 00:03:05.606 common/idpf: not in enabled drivers build config 00:03:05.606 common/ionic: not in enabled drivers build config 00:03:05.606 common/mvep: not in enabled drivers build config 00:03:05.606 common/octeontx: not in enabled drivers build config 00:03:05.606 bus/auxiliary: not in enabled drivers build config 00:03:05.606 bus/cdx: not in enabled drivers build config 00:03:05.606 bus/dpaa: not in enabled drivers build config 00:03:05.606 bus/fslmc: not in enabled drivers build config 00:03:05.606 bus/ifpga: not in enabled drivers build config 00:03:05.606 bus/platform: not in enabled drivers build config 00:03:05.606 bus/uacce: not in enabled drivers build config 00:03:05.606 bus/vmbus: not in enabled drivers build config 00:03:05.606 common/cnxk: not in enabled drivers build config 00:03:05.606 common/mlx5: not in enabled drivers build config 00:03:05.606 common/nfp: not in enabled drivers build config 00:03:05.606 common/nitrox: not in enabled drivers build config 00:03:05.606 common/qat: not in enabled drivers build config 00:03:05.606 common/sfc_efx: not in enabled drivers build config 00:03:05.606 mempool/bucket: not in enabled drivers build config 00:03:05.606 mempool/cnxk: not in enabled drivers build config 00:03:05.606 mempool/dpaa: not in enabled drivers build config 00:03:05.606 mempool/dpaa2: not in enabled drivers build config 00:03:05.606 mempool/octeontx: not in enabled drivers build config 00:03:05.606 mempool/stack: not in enabled drivers build config 00:03:05.606 dma/cnxk: not in enabled drivers build 
config 00:03:05.606 dma/dpaa: not in enabled drivers build config 00:03:05.606 dma/dpaa2: not in enabled drivers build config 00:03:05.606 dma/hisilicon: not in enabled drivers build config 00:03:05.606 dma/idxd: not in enabled drivers build config 00:03:05.606 dma/ioat: not in enabled drivers build config 00:03:05.606 dma/skeleton: not in enabled drivers build config 00:03:05.606 net/af_packet: not in enabled drivers build config 00:03:05.606 net/af_xdp: not in enabled drivers build config 00:03:05.606 net/ark: not in enabled drivers build config 00:03:05.606 net/atlantic: not in enabled drivers build config 00:03:05.606 net/avp: not in enabled drivers build config 00:03:05.606 net/axgbe: not in enabled drivers build config 00:03:05.606 net/bnx2x: not in enabled drivers build config 00:03:05.606 net/bnxt: not in enabled drivers build config 00:03:05.606 net/bonding: not in enabled drivers build config 00:03:05.606 net/cnxk: not in enabled drivers build config 00:03:05.606 net/cpfl: not in enabled drivers build config 00:03:05.606 net/cxgbe: not in enabled drivers build config 00:03:05.606 net/dpaa: not in enabled drivers build config 00:03:05.606 net/dpaa2: not in enabled drivers build config 00:03:05.606 net/e1000: not in enabled drivers build config 00:03:05.606 net/ena: not in enabled drivers build config 00:03:05.606 net/enetc: not in enabled drivers build config 00:03:05.606 net/enetfec: not in enabled drivers build config 00:03:05.606 net/enic: not in enabled drivers build config 00:03:05.606 net/failsafe: not in enabled drivers build config 00:03:05.606 net/fm10k: not in enabled drivers build config 00:03:05.606 net/gve: not in enabled drivers build config 00:03:05.606 net/hinic: not in enabled drivers build config 00:03:05.606 net/hns3: not in enabled drivers build config 00:03:05.606 net/i40e: not in enabled drivers build config 00:03:05.606 net/iavf: not in enabled drivers build config 00:03:05.606 net/ice: not in enabled drivers build config 00:03:05.606 net/idpf: not in enabled drivers build config 00:03:05.606 net/igc: not in enabled drivers build config 00:03:05.606 net/ionic: not in enabled drivers build config 00:03:05.606 net/ipn3ke: not in enabled drivers build config 00:03:05.606 net/ixgbe: not in enabled drivers build config 00:03:05.606 net/mana: not in enabled drivers build config 00:03:05.606 net/memif: not in enabled drivers build config 00:03:05.606 net/mlx4: not in enabled drivers build config 00:03:05.606 net/mlx5: not in enabled drivers build config 00:03:05.606 net/mvneta: not in enabled drivers build config 00:03:05.606 net/mvpp2: not in enabled drivers build config 00:03:05.606 net/netvsc: not in enabled drivers build config 00:03:05.606 net/nfb: not in enabled drivers build config 00:03:05.606 net/nfp: not in enabled drivers build config 00:03:05.606 net/ngbe: not in enabled drivers build config 00:03:05.606 net/null: not in enabled drivers build config 00:03:05.606 net/octeontx: not in enabled drivers build config 00:03:05.606 net/octeon_ep: not in enabled drivers build config 00:03:05.606 net/pcap: not in enabled drivers build config 00:03:05.606 net/pfe: not in enabled drivers build config 00:03:05.606 net/qede: not in enabled drivers build config 00:03:05.606 net/ring: not in enabled drivers build config 00:03:05.606 net/sfc: not in enabled drivers build config 00:03:05.606 net/softnic: not in enabled drivers build config 00:03:05.606 net/tap: not in enabled drivers build config 00:03:05.606 net/thunderx: not in enabled drivers build config 00:03:05.606 
net/txgbe: not in enabled drivers build config 00:03:05.606 net/vdev_netvsc: not in enabled drivers build config 00:03:05.606 net/vhost: not in enabled drivers build config 00:03:05.606 net/virtio: not in enabled drivers build config 00:03:05.606 net/vmxnet3: not in enabled drivers build config 00:03:05.606 raw/*: missing internal dependency, "rawdev" 00:03:05.606 crypto/armv8: not in enabled drivers build config 00:03:05.606 crypto/bcmfs: not in enabled drivers build config 00:03:05.606 crypto/caam_jr: not in enabled drivers build config 00:03:05.606 crypto/ccp: not in enabled drivers build config 00:03:05.606 crypto/cnxk: not in enabled drivers build config 00:03:05.606 crypto/dpaa_sec: not in enabled drivers build config 00:03:05.606 crypto/dpaa2_sec: not in enabled drivers build config 00:03:05.606 crypto/ipsec_mb: not in enabled drivers build config 00:03:05.606 crypto/mlx5: not in enabled drivers build config 00:03:05.606 crypto/mvsam: not in enabled drivers build config 00:03:05.606 crypto/nitrox: not in enabled drivers build config 00:03:05.606 crypto/null: not in enabled drivers build config 00:03:05.606 crypto/octeontx: not in enabled drivers build config 00:03:05.606 crypto/openssl: not in enabled drivers build config 00:03:05.606 crypto/scheduler: not in enabled drivers build config 00:03:05.606 crypto/uadk: not in enabled drivers build config 00:03:05.606 crypto/virtio: not in enabled drivers build config 00:03:05.606 compress/isal: not in enabled drivers build config 00:03:05.606 compress/mlx5: not in enabled drivers build config 00:03:05.606 compress/nitrox: not in enabled drivers build config 00:03:05.606 compress/octeontx: not in enabled drivers build config 00:03:05.606 compress/zlib: not in enabled drivers build config 00:03:05.606 regex/*: missing internal dependency, "regexdev" 00:03:05.606 ml/*: missing internal dependency, "mldev" 00:03:05.606 vdpa/ifc: not in enabled drivers build config 00:03:05.606 vdpa/mlx5: not in enabled drivers build config 00:03:05.606 vdpa/nfp: not in enabled drivers build config 00:03:05.606 vdpa/sfc: not in enabled drivers build config 00:03:05.606 event/*: missing internal dependency, "eventdev" 00:03:05.606 baseband/*: missing internal dependency, "bbdev" 00:03:05.606 gpu/*: missing internal dependency, "gpudev" 00:03:05.606 00:03:05.606 00:03:05.606 Build targets in project: 85 00:03:05.606 00:03:05.606 DPDK 24.03.0 00:03:05.606 00:03:05.606 User defined options 00:03:05.606 buildtype : debug 00:03:05.606 default_library : shared 00:03:05.606 libdir : lib 00:03:05.606 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:05.606 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:05.606 c_link_args : 00:03:05.606 cpu_instruction_set: native 00:03:05.606 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:05.606 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:05.606 enable_docs : false 00:03:05.606 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:05.606 enable_kmods : false 00:03:05.606 max_lcores : 128 00:03:05.606 tests : false 00:03:05.606 00:03:05.606 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:05.606 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:05.606 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:05.606 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:05.606 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:05.606 [4/268] Linking static target lib/librte_kvargs.a 00:03:05.607 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:05.607 [6/268] Linking static target lib/librte_log.a 00:03:05.607 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.607 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:05.607 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:05.607 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:05.865 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:05.865 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:05.865 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:05.865 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:05.865 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:05.865 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:05.865 [17/268] Linking static target lib/librte_telemetry.a 00:03:06.123 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.123 [19/268] Linking target lib/librte_log.so.24.1 00:03:06.123 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:06.381 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:06.381 [22/268] Linking target lib/librte_kvargs.so.24.1 00:03:06.381 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:06.641 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:06.641 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:06.641 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:06.641 [27/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:06.641 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:06.641 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:06.641 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:06.641 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:06.941 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:06.941 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.941 [34/268] Linking target lib/librte_telemetry.so.24.1 00:03:07.207 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:07.207 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:07.466 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:07.725 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:07.725 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:07.725 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:07.725 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:07.725 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:07.725 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:07.725 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:07.725 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:07.983 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:07.983 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:07.983 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:08.242 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:08.242 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:08.501 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:08.760 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:08.760 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:08.760 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:08.760 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:08.760 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:08.760 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:08.760 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:09.019 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:09.019 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:09.019 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:09.278 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:09.536 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:09.536 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:09.795 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:09.795 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:09.795 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:09.795 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:10.052 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:10.052 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:10.052 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:10.311 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:10.311 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:10.311 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:10.311 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:10.570 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:10.570 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:10.570 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:10.570 [79/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:10.570 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:10.829 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:11.087 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:11.087 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:11.087 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:11.345 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:11.345 [86/268] Linking static target lib/librte_eal.a 00:03:11.345 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:11.345 [88/268] Linking static target lib/librte_ring.a 00:03:11.603 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:11.603 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:11.603 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:11.603 [92/268] Linking static target lib/librte_rcu.a 00:03:11.603 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:11.603 [94/268] Linking static target lib/librte_mempool.a 00:03:11.862 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:11.862 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:11.862 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.122 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:12.122 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:12.122 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:12.122 [101/268] Linking static target lib/librte_mbuf.a 00:03:12.122 [102/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:12.122 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.702 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:12.702 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:12.960 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:12.960 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:12.960 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:12.960 [109/268] Linking static target lib/librte_net.a 00:03:12.960 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.219 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:13.219 [112/268] Linking static target lib/librte_meter.a 00:03:13.477 [113/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.477 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:13.477 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.735 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.735 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:13.735 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:13.735 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:14.671 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:14.671 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:14.929 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:14.929 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:14.929 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:14.929 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:14.929 [126/268] Linking static target lib/librte_pci.a 00:03:14.929 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:15.186 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:15.186 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:15.186 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:15.186 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:15.186 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:15.186 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:15.186 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:15.444 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:15.444 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:15.444 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:15.444 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:15.444 [139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.444 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:15.444 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:15.444 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:15.444 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:15.444 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:15.702 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:15.702 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:15.960 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:15.960 [148/268] Linking static target lib/librte_ethdev.a 00:03:15.960 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:15.960 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:16.218 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:16.218 [152/268] Linking static target lib/librte_timer.a 00:03:16.218 [153/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:16.218 [154/268] Linking static target lib/librte_cmdline.a 00:03:16.476 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:16.476 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:16.476 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:16.476 [158/268] Linking static target lib/librte_hash.a 00:03:16.734 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:16.734 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:16.734 
[161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:16.734 [162/268] Linking static target lib/librte_compressdev.a 00:03:16.992 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.992 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:16.992 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:17.250 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:17.507 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:17.507 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:17.508 [169/268] Linking static target lib/librte_dmadev.a 00:03:17.508 [170/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:17.508 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:17.766 [172/268] Linking static target lib/librte_cryptodev.a 00:03:17.766 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:17.766 [174/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:17.766 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.766 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.024 [177/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.024 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:18.282 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:18.282 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:18.282 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.282 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:18.282 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:18.540 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:18.800 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:18.800 [186/268] Linking static target lib/librte_power.a 00:03:18.800 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:18.800 [188/268] Linking static target lib/librte_reorder.a 00:03:19.058 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:19.058 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:19.058 [191/268] Linking static target lib/librte_security.a 00:03:19.058 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:19.316 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:19.316 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.316 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:19.883 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.141 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.141 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:20.141 [199/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.141 
[200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:20.141 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:20.141 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:20.707 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:20.707 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:20.707 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:20.707 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:20.707 [207/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:20.965 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:20.965 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:20.965 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:20.965 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:20.965 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:20.965 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:20.965 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:20.965 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:20.965 [216/268] Linking static target drivers/librte_bus_vdev.a 00:03:21.223 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:21.223 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:21.223 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:21.223 [220/268] Linking static target drivers/librte_bus_pci.a 00:03:21.223 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.223 [222/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:21.223 [223/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:21.481 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:21.481 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:21.481 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:21.481 [227/268] Linking static target drivers/librte_mempool_ring.a 00:03:21.481 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.416 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:22.416 [230/268] Linking static target lib/librte_vhost.a 00:03:22.982 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.982 [232/268] Linking target lib/librte_eal.so.24.1 00:03:23.240 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:23.240 [234/268] Linking target lib/librte_meter.so.24.1 00:03:23.240 [235/268] Linking target lib/librte_ring.so.24.1 00:03:23.240 [236/268] Linking target lib/librte_pci.so.24.1 00:03:23.240 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:23.240 [238/268] Linking target lib/librte_timer.so.24.1 00:03:23.240 [239/268] Linking target lib/librte_dmadev.so.24.1 
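Note on the DPDK portion of this build: the compile and link stream above was configured by the "User defined options" summary printed earlier in this log (debug buildtype, shared libraries, only the bus, bus/pci, bus/vdev and mempool/ring drivers enabled, docs/kmods/tests disabled, max_lcores 128). A rough standalone equivalent of that configure-and-build step is sketched below; the option values are copied from that summary, but the command form and the build directory name are illustrative assumptions, and the long disable_apps/disable_libs lists are omitted, so this is not the literal command the CI wrapper ran.

    # Illustrative sketch of the embedded DPDK 24.03 build configured above
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build --libdir=lib \
        --buildtype=debug --default-library=shared \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Dcpu_instruction_set=native \
        -Denable_docs=false -Denable_kmods=false -Dtests=false -Dmax_lcores=128
    # Build with the same backend command the log reports
    ninja -C build-tmp -j 10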
00:03:23.498 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:23.498 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:23.498 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:23.498 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:23.498 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:23.498 [245/268] Linking target lib/librte_rcu.so.24.1 00:03:23.498 [246/268] Linking target lib/librte_mempool.so.24.1 00:03:23.498 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:23.498 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:23.498 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:23.498 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:23.498 [251/268] Linking target lib/librte_mbuf.so.24.1 00:03:23.757 [252/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:23.757 [253/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.757 [254/268] Linking target lib/librte_reorder.so.24.1 00:03:23.757 [255/268] Linking target lib/librte_compressdev.so.24.1 00:03:23.757 [256/268] Linking target lib/librte_net.so.24.1 00:03:23.757 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:03:23.757 [258/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.015 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:24.015 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:24.015 [261/268] Linking target lib/librte_hash.so.24.1 00:03:24.015 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:24.015 [263/268] Linking target lib/librte_security.so.24.1 00:03:24.015 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:24.274 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:24.274 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:24.274 [267/268] Linking target lib/librte_power.so.24.1 00:03:24.274 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:24.274 INFO: autodetecting backend as ninja 00:03:24.274 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:25.658 CC lib/ut/ut.o 00:03:25.658 CC lib/ut_mock/mock.o 00:03:25.658 CC lib/log/log.o 00:03:25.658 CC lib/log/log_flags.o 00:03:25.658 CC lib/log/log_deprecated.o 00:03:25.658 LIB libspdk_ut.a 00:03:25.658 LIB libspdk_ut_mock.a 00:03:25.658 SO libspdk_ut_mock.so.6.0 00:03:25.658 SO libspdk_ut.so.2.0 00:03:25.658 LIB libspdk_log.a 00:03:25.658 SYMLINK libspdk_ut_mock.so 00:03:25.658 SO libspdk_log.so.7.0 00:03:25.658 SYMLINK libspdk_ut.so 00:03:25.917 SYMLINK libspdk_log.so 00:03:25.917 CC lib/util/base64.o 00:03:25.917 CC lib/util/bit_array.o 00:03:25.917 CC lib/ioat/ioat.o 00:03:25.917 CC lib/util/cpuset.o 00:03:25.917 CXX lib/trace_parser/trace.o 00:03:25.917 CC lib/util/crc16.o 00:03:25.917 CC lib/util/crc32.o 00:03:25.917 CC lib/dma/dma.o 00:03:25.917 CC lib/util/crc32c.o 00:03:26.175 CC lib/vfio_user/host/vfio_user_pci.o 00:03:26.175 CC lib/util/crc32_ieee.o 00:03:26.175 CC lib/util/crc64.o 00:03:26.175 CC lib/util/dif.o 
00:03:26.175 CC lib/util/fd.o 00:03:26.175 CC lib/util/fd_group.o 00:03:26.175 LIB libspdk_dma.a 00:03:26.175 CC lib/util/file.o 00:03:26.433 SO libspdk_dma.so.4.0 00:03:26.433 CC lib/util/hexlify.o 00:03:26.433 CC lib/util/iov.o 00:03:26.433 CC lib/vfio_user/host/vfio_user.o 00:03:26.433 SYMLINK libspdk_dma.so 00:03:26.433 LIB libspdk_ioat.a 00:03:26.433 CC lib/util/math.o 00:03:26.433 SO libspdk_ioat.so.7.0 00:03:26.433 CC lib/util/net.o 00:03:26.433 CC lib/util/pipe.o 00:03:26.433 SYMLINK libspdk_ioat.so 00:03:26.433 CC lib/util/strerror_tls.o 00:03:26.433 CC lib/util/string.o 00:03:26.433 CC lib/util/uuid.o 00:03:26.433 CC lib/util/xor.o 00:03:26.433 CC lib/util/zipf.o 00:03:26.691 LIB libspdk_vfio_user.a 00:03:26.691 SO libspdk_vfio_user.so.5.0 00:03:26.691 SYMLINK libspdk_vfio_user.so 00:03:26.691 LIB libspdk_util.a 00:03:26.949 SO libspdk_util.so.10.0 00:03:26.949 LIB libspdk_trace_parser.a 00:03:26.949 SYMLINK libspdk_util.so 00:03:27.207 SO libspdk_trace_parser.so.5.0 00:03:27.207 SYMLINK libspdk_trace_parser.so 00:03:27.207 CC lib/conf/conf.o 00:03:27.207 CC lib/rdma_provider/common.o 00:03:27.207 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:27.207 CC lib/json/json_parse.o 00:03:27.207 CC lib/rdma_utils/rdma_utils.o 00:03:27.207 CC lib/json/json_util.o 00:03:27.207 CC lib/env_dpdk/env.o 00:03:27.207 CC lib/idxd/idxd.o 00:03:27.207 CC lib/idxd/idxd_user.o 00:03:27.207 CC lib/vmd/vmd.o 00:03:27.464 LIB libspdk_rdma_provider.a 00:03:27.464 CC lib/idxd/idxd_kernel.o 00:03:27.464 LIB libspdk_conf.a 00:03:27.464 CC lib/vmd/led.o 00:03:27.464 CC lib/env_dpdk/memory.o 00:03:27.464 CC lib/json/json_write.o 00:03:27.464 SO libspdk_rdma_provider.so.6.0 00:03:27.464 SO libspdk_conf.so.6.0 00:03:27.464 LIB libspdk_rdma_utils.a 00:03:27.464 SO libspdk_rdma_utils.so.1.0 00:03:27.464 SYMLINK libspdk_rdma_provider.so 00:03:27.464 SYMLINK libspdk_conf.so 00:03:27.464 CC lib/env_dpdk/pci.o 00:03:27.464 CC lib/env_dpdk/init.o 00:03:27.464 SYMLINK libspdk_rdma_utils.so 00:03:27.464 CC lib/env_dpdk/threads.o 00:03:27.464 CC lib/env_dpdk/pci_ioat.o 00:03:27.721 CC lib/env_dpdk/pci_virtio.o 00:03:27.721 CC lib/env_dpdk/pci_vmd.o 00:03:27.721 CC lib/env_dpdk/pci_idxd.o 00:03:27.721 LIB libspdk_json.a 00:03:27.721 CC lib/env_dpdk/pci_event.o 00:03:27.721 LIB libspdk_idxd.a 00:03:27.721 SO libspdk_json.so.6.0 00:03:27.721 SO libspdk_idxd.so.12.0 00:03:27.978 SYMLINK libspdk_json.so 00:03:27.978 LIB libspdk_vmd.a 00:03:27.978 CC lib/env_dpdk/sigbus_handler.o 00:03:27.978 SYMLINK libspdk_idxd.so 00:03:27.978 CC lib/env_dpdk/pci_dpdk.o 00:03:27.978 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:27.978 SO libspdk_vmd.so.6.0 00:03:27.978 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:27.978 SYMLINK libspdk_vmd.so 00:03:27.978 CC lib/jsonrpc/jsonrpc_server.o 00:03:27.978 CC lib/jsonrpc/jsonrpc_client.o 00:03:27.978 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:27.978 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:28.236 LIB libspdk_jsonrpc.a 00:03:28.236 SO libspdk_jsonrpc.so.6.0 00:03:28.493 SYMLINK libspdk_jsonrpc.so 00:03:28.493 LIB libspdk_env_dpdk.a 00:03:28.752 SO libspdk_env_dpdk.so.15.0 00:03:28.752 CC lib/rpc/rpc.o 00:03:28.752 SYMLINK libspdk_env_dpdk.so 00:03:29.009 LIB libspdk_rpc.a 00:03:29.009 SO libspdk_rpc.so.6.0 00:03:29.009 SYMLINK libspdk_rpc.so 00:03:29.268 CC lib/keyring/keyring.o 00:03:29.268 CC lib/notify/notify_rpc.o 00:03:29.268 CC lib/notify/notify.o 00:03:29.268 CC lib/keyring/keyring_rpc.o 00:03:29.268 CC lib/trace/trace_flags.o 00:03:29.268 CC lib/trace/trace.o 00:03:29.268 CC lib/trace/trace_rpc.o 
00:03:29.527 LIB libspdk_notify.a 00:03:29.527 SO libspdk_notify.so.6.0 00:03:29.527 LIB libspdk_keyring.a 00:03:29.527 LIB libspdk_trace.a 00:03:29.527 SYMLINK libspdk_notify.so 00:03:29.527 SO libspdk_keyring.so.1.0 00:03:29.527 SO libspdk_trace.so.10.0 00:03:29.527 SYMLINK libspdk_keyring.so 00:03:29.785 SYMLINK libspdk_trace.so 00:03:30.044 CC lib/sock/sock.o 00:03:30.044 CC lib/sock/sock_rpc.o 00:03:30.044 CC lib/thread/thread.o 00:03:30.044 CC lib/thread/iobuf.o 00:03:30.302 LIB libspdk_sock.a 00:03:30.561 SO libspdk_sock.so.10.0 00:03:30.561 SYMLINK libspdk_sock.so 00:03:30.819 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:30.819 CC lib/nvme/nvme_ctrlr.o 00:03:30.819 CC lib/nvme/nvme_fabric.o 00:03:30.819 CC lib/nvme/nvme_ns_cmd.o 00:03:30.819 CC lib/nvme/nvme_ns.o 00:03:30.819 CC lib/nvme/nvme_pcie.o 00:03:30.819 CC lib/nvme/nvme_pcie_common.o 00:03:30.819 CC lib/nvme/nvme_qpair.o 00:03:30.819 CC lib/nvme/nvme.o 00:03:31.385 LIB libspdk_thread.a 00:03:31.644 SO libspdk_thread.so.10.1 00:03:31.644 CC lib/nvme/nvme_quirks.o 00:03:31.644 CC lib/nvme/nvme_transport.o 00:03:31.644 SYMLINK libspdk_thread.so 00:03:31.644 CC lib/nvme/nvme_discovery.o 00:03:31.644 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:31.644 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:31.644 CC lib/nvme/nvme_tcp.o 00:03:31.902 CC lib/nvme/nvme_opal.o 00:03:31.902 CC lib/nvme/nvme_io_msg.o 00:03:31.902 CC lib/nvme/nvme_poll_group.o 00:03:32.161 CC lib/nvme/nvme_zns.o 00:03:32.161 CC lib/nvme/nvme_stubs.o 00:03:32.161 CC lib/nvme/nvme_auth.o 00:03:32.419 CC lib/nvme/nvme_cuse.o 00:03:32.419 CC lib/nvme/nvme_rdma.o 00:03:32.676 CC lib/accel/accel.o 00:03:32.676 CC lib/blob/blobstore.o 00:03:32.676 CC lib/init/json_config.o 00:03:32.934 CC lib/init/subsystem.o 00:03:32.934 CC lib/init/subsystem_rpc.o 00:03:32.934 CC lib/accel/accel_rpc.o 00:03:32.934 CC lib/accel/accel_sw.o 00:03:32.934 CC lib/init/rpc.o 00:03:33.192 CC lib/blob/request.o 00:03:33.192 CC lib/blob/zeroes.o 00:03:33.192 CC lib/blob/blob_bs_dev.o 00:03:33.192 LIB libspdk_init.a 00:03:33.192 CC lib/virtio/virtio.o 00:03:33.192 CC lib/virtio/virtio_vhost_user.o 00:03:33.192 SO libspdk_init.so.5.0 00:03:33.449 CC lib/virtio/virtio_vfio_user.o 00:03:33.449 SYMLINK libspdk_init.so 00:03:33.449 CC lib/virtio/virtio_pci.o 00:03:33.449 CC lib/event/app.o 00:03:33.449 CC lib/event/log_rpc.o 00:03:33.449 CC lib/event/reactor.o 00:03:33.449 LIB libspdk_accel.a 00:03:33.449 CC lib/event/app_rpc.o 00:03:33.707 CC lib/event/scheduler_static.o 00:03:33.707 SO libspdk_accel.so.16.0 00:03:33.707 LIB libspdk_nvme.a 00:03:33.707 SYMLINK libspdk_accel.so 00:03:33.707 LIB libspdk_virtio.a 00:03:33.707 SO libspdk_virtio.so.7.0 00:03:33.707 SYMLINK libspdk_virtio.so 00:03:33.965 SO libspdk_nvme.so.13.1 00:03:33.965 CC lib/bdev/bdev_rpc.o 00:03:33.965 CC lib/bdev/bdev.o 00:03:33.965 CC lib/bdev/bdev_zone.o 00:03:33.965 CC lib/bdev/part.o 00:03:33.965 CC lib/bdev/scsi_nvme.o 00:03:33.965 LIB libspdk_event.a 00:03:33.965 SO libspdk_event.so.14.0 00:03:33.965 SYMLINK libspdk_event.so 00:03:34.223 SYMLINK libspdk_nvme.so 00:03:35.616 LIB libspdk_blob.a 00:03:35.616 SO libspdk_blob.so.11.0 00:03:35.874 SYMLINK libspdk_blob.so 00:03:36.132 CC lib/blobfs/blobfs.o 00:03:36.132 CC lib/blobfs/tree.o 00:03:36.132 CC lib/lvol/lvol.o 00:03:36.698 LIB libspdk_bdev.a 00:03:36.698 SO libspdk_bdev.so.16.0 00:03:36.698 SYMLINK libspdk_bdev.so 00:03:36.956 LIB libspdk_blobfs.a 00:03:36.956 SO libspdk_blobfs.so.10.0 00:03:36.956 CC lib/scsi/dev.o 00:03:36.956 CC lib/scsi/lun.o 00:03:36.956 CC lib/ftl/ftl_core.o 
00:03:36.956 CC lib/nvmf/ctrlr.o 00:03:36.956 CC lib/scsi/port.o 00:03:36.956 CC lib/ublk/ublk.o 00:03:36.956 CC lib/scsi/scsi.o 00:03:36.956 CC lib/nbd/nbd.o 00:03:36.956 SYMLINK libspdk_blobfs.so 00:03:36.956 CC lib/ublk/ublk_rpc.o 00:03:37.214 LIB libspdk_lvol.a 00:03:37.214 SO libspdk_lvol.so.10.0 00:03:37.214 CC lib/ftl/ftl_init.o 00:03:37.214 SYMLINK libspdk_lvol.so 00:03:37.214 CC lib/ftl/ftl_layout.o 00:03:37.214 CC lib/ftl/ftl_debug.o 00:03:37.214 CC lib/ftl/ftl_io.o 00:03:37.214 CC lib/ftl/ftl_sb.o 00:03:37.214 CC lib/scsi/scsi_bdev.o 00:03:37.473 CC lib/ftl/ftl_l2p.o 00:03:37.473 CC lib/nbd/nbd_rpc.o 00:03:37.473 CC lib/ftl/ftl_l2p_flat.o 00:03:37.473 CC lib/ftl/ftl_nv_cache.o 00:03:37.473 CC lib/ftl/ftl_band.o 00:03:37.473 CC lib/ftl/ftl_band_ops.o 00:03:37.473 CC lib/ftl/ftl_writer.o 00:03:37.473 LIB libspdk_nbd.a 00:03:37.732 CC lib/ftl/ftl_rq.o 00:03:37.732 SO libspdk_nbd.so.7.0 00:03:37.732 LIB libspdk_ublk.a 00:03:37.732 CC lib/ftl/ftl_reloc.o 00:03:37.732 SO libspdk_ublk.so.3.0 00:03:37.732 SYMLINK libspdk_nbd.so 00:03:37.732 CC lib/ftl/ftl_l2p_cache.o 00:03:37.732 SYMLINK libspdk_ublk.so 00:03:37.732 CC lib/nvmf/ctrlr_discovery.o 00:03:37.732 CC lib/ftl/ftl_p2l.o 00:03:37.732 CC lib/scsi/scsi_pr.o 00:03:37.732 CC lib/scsi/scsi_rpc.o 00:03:37.732 CC lib/scsi/task.o 00:03:37.990 CC lib/ftl/mngt/ftl_mngt.o 00:03:37.990 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:37.990 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:37.990 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:38.249 LIB libspdk_scsi.a 00:03:38.249 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:38.249 SO libspdk_scsi.so.9.0 00:03:38.249 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:38.249 CC lib/nvmf/ctrlr_bdev.o 00:03:38.249 CC lib/nvmf/subsystem.o 00:03:38.249 CC lib/nvmf/nvmf.o 00:03:38.249 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:38.249 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:38.249 SYMLINK libspdk_scsi.so 00:03:38.249 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:38.508 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:38.508 CC lib/nvmf/nvmf_rpc.o 00:03:38.508 CC lib/nvmf/transport.o 00:03:38.508 CC lib/nvmf/tcp.o 00:03:38.508 CC lib/nvmf/stubs.o 00:03:38.767 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:38.767 CC lib/nvmf/mdns_server.o 00:03:38.767 CC lib/nvmf/rdma.o 00:03:38.767 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:38.767 CC lib/nvmf/auth.o 00:03:39.025 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:39.025 CC lib/ftl/utils/ftl_conf.o 00:03:39.025 CC lib/ftl/utils/ftl_md.o 00:03:39.283 CC lib/ftl/utils/ftl_mempool.o 00:03:39.283 CC lib/ftl/utils/ftl_bitmap.o 00:03:39.283 CC lib/ftl/utils/ftl_property.o 00:03:39.283 CC lib/iscsi/conn.o 00:03:39.283 CC lib/iscsi/init_grp.o 00:03:39.542 CC lib/vhost/vhost.o 00:03:39.542 CC lib/vhost/vhost_rpc.o 00:03:39.542 CC lib/iscsi/iscsi.o 00:03:39.542 CC lib/iscsi/md5.o 00:03:39.542 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:39.800 CC lib/iscsi/param.o 00:03:39.800 CC lib/iscsi/portal_grp.o 00:03:39.800 CC lib/iscsi/tgt_node.o 00:03:39.800 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:40.111 CC lib/iscsi/iscsi_subsystem.o 00:03:40.111 CC lib/iscsi/iscsi_rpc.o 00:03:40.111 CC lib/vhost/vhost_scsi.o 00:03:40.111 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:40.111 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:40.111 CC lib/vhost/vhost_blk.o 00:03:40.385 CC lib/vhost/rte_vhost_user.o 00:03:40.385 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:40.385 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:40.385 CC lib/iscsi/task.o 00:03:40.385 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:40.385 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:40.385 CC 
lib/ftl/upgrade/ftl_sb_v5.o 00:03:40.385 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:40.643 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:40.643 CC lib/ftl/base/ftl_base_dev.o 00:03:40.643 CC lib/ftl/base/ftl_base_bdev.o 00:03:40.643 LIB libspdk_nvmf.a 00:03:40.643 CC lib/ftl/ftl_trace.o 00:03:40.901 SO libspdk_nvmf.so.19.0 00:03:40.901 LIB libspdk_iscsi.a 00:03:40.901 SO libspdk_iscsi.so.8.0 00:03:40.901 LIB libspdk_ftl.a 00:03:40.901 SYMLINK libspdk_nvmf.so 00:03:41.160 SYMLINK libspdk_iscsi.so 00:03:41.160 SO libspdk_ftl.so.9.0 00:03:41.419 LIB libspdk_vhost.a 00:03:41.419 SO libspdk_vhost.so.8.0 00:03:41.678 SYMLINK libspdk_vhost.so 00:03:41.678 SYMLINK libspdk_ftl.so 00:03:41.936 CC module/env_dpdk/env_dpdk_rpc.o 00:03:42.195 CC module/sock/uring/uring.o 00:03:42.195 CC module/accel/dsa/accel_dsa.o 00:03:42.195 CC module/accel/ioat/accel_ioat.o 00:03:42.195 CC module/sock/posix/posix.o 00:03:42.195 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:42.195 CC module/keyring/file/keyring.o 00:03:42.195 CC module/blob/bdev/blob_bdev.o 00:03:42.195 CC module/accel/error/accel_error.o 00:03:42.195 CC module/accel/iaa/accel_iaa.o 00:03:42.195 LIB libspdk_env_dpdk_rpc.a 00:03:42.195 SO libspdk_env_dpdk_rpc.so.6.0 00:03:42.195 SYMLINK libspdk_env_dpdk_rpc.so 00:03:42.195 CC module/accel/iaa/accel_iaa_rpc.o 00:03:42.454 CC module/accel/ioat/accel_ioat_rpc.o 00:03:42.454 CC module/keyring/file/keyring_rpc.o 00:03:42.454 LIB libspdk_scheduler_dynamic.a 00:03:42.454 CC module/accel/error/accel_error_rpc.o 00:03:42.454 SO libspdk_scheduler_dynamic.so.4.0 00:03:42.454 CC module/accel/dsa/accel_dsa_rpc.o 00:03:42.454 LIB libspdk_blob_bdev.a 00:03:42.454 LIB libspdk_accel_iaa.a 00:03:42.454 SYMLINK libspdk_scheduler_dynamic.so 00:03:42.454 SO libspdk_blob_bdev.so.11.0 00:03:42.454 SO libspdk_accel_iaa.so.3.0 00:03:42.454 LIB libspdk_accel_ioat.a 00:03:42.454 LIB libspdk_keyring_file.a 00:03:42.454 LIB libspdk_accel_error.a 00:03:42.454 SO libspdk_accel_ioat.so.6.0 00:03:42.454 SO libspdk_keyring_file.so.1.0 00:03:42.454 SYMLINK libspdk_blob_bdev.so 00:03:42.454 SYMLINK libspdk_accel_iaa.so 00:03:42.454 SO libspdk_accel_error.so.2.0 00:03:42.454 SYMLINK libspdk_accel_ioat.so 00:03:42.712 SYMLINK libspdk_keyring_file.so 00:03:42.712 SYMLINK libspdk_accel_error.so 00:03:42.712 CC module/keyring/linux/keyring.o 00:03:42.712 CC module/keyring/linux/keyring_rpc.o 00:03:42.712 LIB libspdk_accel_dsa.a 00:03:42.712 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:42.712 SO libspdk_accel_dsa.so.5.0 00:03:42.712 SYMLINK libspdk_accel_dsa.so 00:03:42.712 CC module/scheduler/gscheduler/gscheduler.o 00:03:42.712 LIB libspdk_keyring_linux.a 00:03:42.712 SO libspdk_keyring_linux.so.1.0 00:03:42.712 LIB libspdk_scheduler_dpdk_governor.a 00:03:42.712 CC module/bdev/delay/vbdev_delay.o 00:03:42.970 CC module/bdev/error/vbdev_error.o 00:03:42.970 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:42.970 LIB libspdk_sock_posix.a 00:03:42.970 LIB libspdk_sock_uring.a 00:03:42.970 CC module/blobfs/bdev/blobfs_bdev.o 00:03:42.970 SO libspdk_sock_uring.so.5.0 00:03:42.970 LIB libspdk_scheduler_gscheduler.a 00:03:42.970 CC module/bdev/gpt/gpt.o 00:03:42.970 SYMLINK libspdk_keyring_linux.so 00:03:42.970 SO libspdk_sock_posix.so.6.0 00:03:42.970 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:42.970 SO libspdk_scheduler_gscheduler.so.4.0 00:03:42.970 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:42.970 CC module/bdev/lvol/vbdev_lvol.o 00:03:42.970 SYMLINK libspdk_sock_uring.so 00:03:42.970 CC module/bdev/gpt/vbdev_gpt.o 00:03:42.970 
SYMLINK libspdk_sock_posix.so 00:03:42.970 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:42.970 SYMLINK libspdk_scheduler_gscheduler.so 00:03:42.970 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:43.228 CC module/bdev/error/vbdev_error_rpc.o 00:03:43.228 CC module/bdev/malloc/bdev_malloc.o 00:03:43.228 LIB libspdk_bdev_delay.a 00:03:43.228 CC module/bdev/null/bdev_null.o 00:03:43.228 SO libspdk_bdev_delay.so.6.0 00:03:43.228 LIB libspdk_bdev_gpt.a 00:03:43.228 LIB libspdk_blobfs_bdev.a 00:03:43.228 LIB libspdk_bdev_error.a 00:03:43.228 CC module/bdev/passthru/vbdev_passthru.o 00:03:43.228 CC module/bdev/nvme/bdev_nvme.o 00:03:43.228 SO libspdk_blobfs_bdev.so.6.0 00:03:43.228 SO libspdk_bdev_gpt.so.6.0 00:03:43.228 SO libspdk_bdev_error.so.6.0 00:03:43.228 SYMLINK libspdk_bdev_delay.so 00:03:43.486 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:43.486 CC module/bdev/nvme/nvme_rpc.o 00:03:43.486 SYMLINK libspdk_blobfs_bdev.so 00:03:43.486 SYMLINK libspdk_bdev_gpt.so 00:03:43.486 CC module/bdev/nvme/bdev_mdns_client.o 00:03:43.486 CC module/bdev/nvme/vbdev_opal.o 00:03:43.486 SYMLINK libspdk_bdev_error.so 00:03:43.486 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:43.486 LIB libspdk_bdev_lvol.a 00:03:43.486 CC module/bdev/null/bdev_null_rpc.o 00:03:43.486 SO libspdk_bdev_lvol.so.6.0 00:03:43.487 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:43.487 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:43.487 SYMLINK libspdk_bdev_lvol.so 00:03:43.745 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:43.745 LIB libspdk_bdev_null.a 00:03:43.745 LIB libspdk_bdev_malloc.a 00:03:43.745 SO libspdk_bdev_null.so.6.0 00:03:43.745 SO libspdk_bdev_malloc.so.6.0 00:03:43.745 CC module/bdev/raid/bdev_raid.o 00:03:43.745 CC module/bdev/split/vbdev_split.o 00:03:43.745 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:43.745 SYMLINK libspdk_bdev_null.so 00:03:43.745 SYMLINK libspdk_bdev_malloc.so 00:03:43.745 LIB libspdk_bdev_passthru.a 00:03:43.745 CC module/bdev/uring/bdev_uring.o 00:03:43.745 SO libspdk_bdev_passthru.so.6.0 00:03:44.003 CC module/bdev/aio/bdev_aio.o 00:03:44.004 SYMLINK libspdk_bdev_passthru.so 00:03:44.004 CC module/bdev/aio/bdev_aio_rpc.o 00:03:44.004 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:44.004 CC module/bdev/iscsi/bdev_iscsi.o 00:03:44.004 CC module/bdev/ftl/bdev_ftl.o 00:03:44.004 CC module/bdev/split/vbdev_split_rpc.o 00:03:44.262 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:44.262 LIB libspdk_bdev_zone_block.a 00:03:44.262 CC module/bdev/raid/bdev_raid_rpc.o 00:03:44.262 SO libspdk_bdev_zone_block.so.6.0 00:03:44.262 LIB libspdk_bdev_split.a 00:03:44.262 CC module/bdev/uring/bdev_uring_rpc.o 00:03:44.262 SO libspdk_bdev_split.so.6.0 00:03:44.262 LIB libspdk_bdev_aio.a 00:03:44.262 SYMLINK libspdk_bdev_zone_block.so 00:03:44.262 SO libspdk_bdev_aio.so.6.0 00:03:44.262 SYMLINK libspdk_bdev_split.so 00:03:44.262 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:44.262 CC module/bdev/raid/bdev_raid_sb.o 00:03:44.262 SYMLINK libspdk_bdev_aio.so 00:03:44.262 CC module/bdev/raid/raid0.o 00:03:44.262 CC module/bdev/raid/raid1.o 00:03:44.521 LIB libspdk_bdev_iscsi.a 00:03:44.521 LIB libspdk_bdev_uring.a 00:03:44.521 CC module/bdev/raid/concat.o 00:03:44.521 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:44.521 SO libspdk_bdev_iscsi.so.6.0 00:03:44.521 SO libspdk_bdev_uring.so.6.0 00:03:44.521 LIB libspdk_bdev_ftl.a 00:03:44.521 SYMLINK libspdk_bdev_iscsi.so 00:03:44.521 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:44.521 SYMLINK libspdk_bdev_uring.so 00:03:44.521 CC module/bdev/virtio/bdev_virtio_rpc.o 
00:03:44.521 SO libspdk_bdev_ftl.so.6.0 00:03:44.521 SYMLINK libspdk_bdev_ftl.so 00:03:44.781 LIB libspdk_bdev_raid.a 00:03:44.781 SO libspdk_bdev_raid.so.6.0 00:03:45.042 SYMLINK libspdk_bdev_raid.so 00:03:45.042 LIB libspdk_bdev_virtio.a 00:03:45.042 SO libspdk_bdev_virtio.so.6.0 00:03:45.042 SYMLINK libspdk_bdev_virtio.so 00:03:45.609 LIB libspdk_bdev_nvme.a 00:03:45.609 SO libspdk_bdev_nvme.so.7.0 00:03:45.609 SYMLINK libspdk_bdev_nvme.so 00:03:46.177 CC module/event/subsystems/vmd/vmd.o 00:03:46.177 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:46.177 CC module/event/subsystems/keyring/keyring.o 00:03:46.177 CC module/event/subsystems/sock/sock.o 00:03:46.177 CC module/event/subsystems/iobuf/iobuf.o 00:03:46.177 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:46.177 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:46.177 CC module/event/subsystems/scheduler/scheduler.o 00:03:46.435 LIB libspdk_event_scheduler.a 00:03:46.435 LIB libspdk_event_keyring.a 00:03:46.435 LIB libspdk_event_vhost_blk.a 00:03:46.435 LIB libspdk_event_sock.a 00:03:46.435 LIB libspdk_event_vmd.a 00:03:46.435 SO libspdk_event_scheduler.so.4.0 00:03:46.435 LIB libspdk_event_iobuf.a 00:03:46.435 SO libspdk_event_keyring.so.1.0 00:03:46.435 SO libspdk_event_vhost_blk.so.3.0 00:03:46.435 SO libspdk_event_sock.so.5.0 00:03:46.435 SO libspdk_event_vmd.so.6.0 00:03:46.435 SO libspdk_event_iobuf.so.3.0 00:03:46.435 SYMLINK libspdk_event_scheduler.so 00:03:46.435 SYMLINK libspdk_event_keyring.so 00:03:46.435 SYMLINK libspdk_event_sock.so 00:03:46.435 SYMLINK libspdk_event_vhost_blk.so 00:03:46.435 SYMLINK libspdk_event_iobuf.so 00:03:46.435 SYMLINK libspdk_event_vmd.so 00:03:46.694 CC module/event/subsystems/accel/accel.o 00:03:46.952 LIB libspdk_event_accel.a 00:03:46.952 SO libspdk_event_accel.so.6.0 00:03:46.952 SYMLINK libspdk_event_accel.so 00:03:47.210 CC module/event/subsystems/bdev/bdev.o 00:03:47.469 LIB libspdk_event_bdev.a 00:03:47.469 SO libspdk_event_bdev.so.6.0 00:03:47.727 SYMLINK libspdk_event_bdev.so 00:03:47.727 CC module/event/subsystems/ublk/ublk.o 00:03:47.727 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:47.727 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:47.727 CC module/event/subsystems/scsi/scsi.o 00:03:47.727 CC module/event/subsystems/nbd/nbd.o 00:03:47.986 LIB libspdk_event_ublk.a 00:03:47.986 LIB libspdk_event_nbd.a 00:03:47.986 LIB libspdk_event_scsi.a 00:03:47.986 SO libspdk_event_ublk.so.3.0 00:03:47.986 SO libspdk_event_nbd.so.6.0 00:03:47.986 SO libspdk_event_scsi.so.6.0 00:03:47.986 SYMLINK libspdk_event_ublk.so 00:03:47.986 LIB libspdk_event_nvmf.a 00:03:47.986 SYMLINK libspdk_event_nbd.so 00:03:48.244 SYMLINK libspdk_event_scsi.so 00:03:48.244 SO libspdk_event_nvmf.so.6.0 00:03:48.244 SYMLINK libspdk_event_nvmf.so 00:03:48.244 CC module/event/subsystems/iscsi/iscsi.o 00:03:48.503 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:48.503 LIB libspdk_event_vhost_scsi.a 00:03:48.503 LIB libspdk_event_iscsi.a 00:03:48.503 SO libspdk_event_vhost_scsi.so.3.0 00:03:48.503 SO libspdk_event_iscsi.so.6.0 00:03:48.503 SYMLINK libspdk_event_vhost_scsi.so 00:03:48.762 SYMLINK libspdk_event_iscsi.so 00:03:48.762 SO libspdk.so.6.0 00:03:48.762 SYMLINK libspdk.so 00:03:49.021 CXX app/trace/trace.o 00:03:49.021 CC app/spdk_lspci/spdk_lspci.o 00:03:49.021 CC app/trace_record/trace_record.o 00:03:49.021 CC app/iscsi_tgt/iscsi_tgt.o 00:03:49.021 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:49.021 CC app/nvmf_tgt/nvmf_main.o 00:03:49.292 CC examples/util/zipf/zipf.o 00:03:49.292 CC 
app/spdk_tgt/spdk_tgt.o 00:03:49.292 CC examples/ioat/perf/perf.o 00:03:49.292 CC test/thread/poller_perf/poller_perf.o 00:03:49.292 LINK spdk_lspci 00:03:49.292 LINK spdk_trace_record 00:03:49.292 LINK interrupt_tgt 00:03:49.292 LINK zipf 00:03:49.292 LINK iscsi_tgt 00:03:49.292 LINK poller_perf 00:03:49.292 LINK nvmf_tgt 00:03:49.292 LINK spdk_tgt 00:03:49.575 LINK ioat_perf 00:03:49.575 CC app/spdk_nvme_perf/perf.o 00:03:49.575 LINK spdk_trace 00:03:49.575 CC app/spdk_nvme_identify/identify.o 00:03:49.575 CC app/spdk_nvme_discover/discovery_aer.o 00:03:49.575 CC app/spdk_top/spdk_top.o 00:03:49.575 CC examples/ioat/verify/verify.o 00:03:49.834 CC app/spdk_dd/spdk_dd.o 00:03:49.834 CC test/dma/test_dma/test_dma.o 00:03:49.834 CC examples/thread/thread/thread_ex.o 00:03:49.834 CC app/fio/nvme/fio_plugin.o 00:03:49.834 LINK spdk_nvme_discover 00:03:49.834 CC examples/sock/hello_world/hello_sock.o 00:03:49.834 LINK verify 00:03:50.092 LINK thread 00:03:50.092 LINK hello_sock 00:03:50.092 LINK test_dma 00:03:50.092 CC app/vhost/vhost.o 00:03:50.092 LINK spdk_dd 00:03:50.351 CC examples/vmd/lsvmd/lsvmd.o 00:03:50.351 CC examples/vmd/led/led.o 00:03:50.351 LINK spdk_nvme_identify 00:03:50.351 LINK spdk_nvme 00:03:50.351 LINK spdk_nvme_perf 00:03:50.351 CC app/fio/bdev/fio_plugin.o 00:03:50.351 LINK lsvmd 00:03:50.351 LINK vhost 00:03:50.610 LINK led 00:03:50.610 LINK spdk_top 00:03:50.610 CC test/app/bdev_svc/bdev_svc.o 00:03:50.610 CC examples/idxd/perf/perf.o 00:03:50.610 CC test/app/histogram_perf/histogram_perf.o 00:03:50.610 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:50.610 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:50.868 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:50.868 CC examples/accel/perf/accel_perf.o 00:03:50.868 CC test/app/jsoncat/jsoncat.o 00:03:50.868 LINK bdev_svc 00:03:50.868 LINK histogram_perf 00:03:50.868 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:50.868 CC examples/blob/hello_world/hello_blob.o 00:03:50.868 LINK spdk_bdev 00:03:50.868 LINK jsoncat 00:03:50.868 LINK idxd_perf 00:03:51.127 CC examples/blob/cli/blobcli.o 00:03:51.127 LINK nvme_fuzz 00:03:51.127 CC test/app/stub/stub.o 00:03:51.127 LINK hello_blob 00:03:51.127 TEST_HEADER include/spdk/accel.h 00:03:51.127 TEST_HEADER include/spdk/accel_module.h 00:03:51.127 TEST_HEADER include/spdk/assert.h 00:03:51.127 TEST_HEADER include/spdk/barrier.h 00:03:51.127 TEST_HEADER include/spdk/base64.h 00:03:51.127 TEST_HEADER include/spdk/bdev.h 00:03:51.127 TEST_HEADER include/spdk/bdev_module.h 00:03:51.127 TEST_HEADER include/spdk/bdev_zone.h 00:03:51.127 TEST_HEADER include/spdk/bit_array.h 00:03:51.127 TEST_HEADER include/spdk/bit_pool.h 00:03:51.127 TEST_HEADER include/spdk/blob_bdev.h 00:03:51.127 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:51.127 TEST_HEADER include/spdk/blobfs.h 00:03:51.127 TEST_HEADER include/spdk/blob.h 00:03:51.127 TEST_HEADER include/spdk/conf.h 00:03:51.127 TEST_HEADER include/spdk/config.h 00:03:51.127 CC examples/nvme/hello_world/hello_world.o 00:03:51.127 TEST_HEADER include/spdk/cpuset.h 00:03:51.127 TEST_HEADER include/spdk/crc16.h 00:03:51.127 TEST_HEADER include/spdk/crc32.h 00:03:51.127 TEST_HEADER include/spdk/crc64.h 00:03:51.127 TEST_HEADER include/spdk/dif.h 00:03:51.127 LINK accel_perf 00:03:51.127 TEST_HEADER include/spdk/dma.h 00:03:51.127 TEST_HEADER include/spdk/endian.h 00:03:51.127 TEST_HEADER include/spdk/env_dpdk.h 00:03:51.127 TEST_HEADER include/spdk/env.h 00:03:51.127 TEST_HEADER include/spdk/event.h 00:03:51.127 TEST_HEADER include/spdk/fd_group.h 
00:03:51.127 TEST_HEADER include/spdk/fd.h 00:03:51.127 TEST_HEADER include/spdk/file.h 00:03:51.127 TEST_HEADER include/spdk/ftl.h 00:03:51.127 TEST_HEADER include/spdk/gpt_spec.h 00:03:51.127 TEST_HEADER include/spdk/hexlify.h 00:03:51.127 TEST_HEADER include/spdk/histogram_data.h 00:03:51.127 TEST_HEADER include/spdk/idxd.h 00:03:51.127 TEST_HEADER include/spdk/idxd_spec.h 00:03:51.127 TEST_HEADER include/spdk/init.h 00:03:51.127 TEST_HEADER include/spdk/ioat.h 00:03:51.386 TEST_HEADER include/spdk/ioat_spec.h 00:03:51.386 TEST_HEADER include/spdk/iscsi_spec.h 00:03:51.386 LINK stub 00:03:51.386 TEST_HEADER include/spdk/json.h 00:03:51.386 TEST_HEADER include/spdk/jsonrpc.h 00:03:51.386 TEST_HEADER include/spdk/keyring.h 00:03:51.386 TEST_HEADER include/spdk/keyring_module.h 00:03:51.386 TEST_HEADER include/spdk/likely.h 00:03:51.386 LINK vhost_fuzz 00:03:51.386 TEST_HEADER include/spdk/log.h 00:03:51.386 TEST_HEADER include/spdk/lvol.h 00:03:51.386 TEST_HEADER include/spdk/memory.h 00:03:51.386 TEST_HEADER include/spdk/mmio.h 00:03:51.386 TEST_HEADER include/spdk/nbd.h 00:03:51.386 TEST_HEADER include/spdk/net.h 00:03:51.386 TEST_HEADER include/spdk/notify.h 00:03:51.386 TEST_HEADER include/spdk/nvme.h 00:03:51.386 TEST_HEADER include/spdk/nvme_intel.h 00:03:51.386 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:51.386 CC test/blobfs/mkfs/mkfs.o 00:03:51.386 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:51.386 CC examples/nvme/reconnect/reconnect.o 00:03:51.386 TEST_HEADER include/spdk/nvme_spec.h 00:03:51.386 TEST_HEADER include/spdk/nvme_zns.h 00:03:51.386 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:51.386 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:51.386 TEST_HEADER include/spdk/nvmf.h 00:03:51.386 TEST_HEADER include/spdk/nvmf_spec.h 00:03:51.386 TEST_HEADER include/spdk/nvmf_transport.h 00:03:51.386 TEST_HEADER include/spdk/opal.h 00:03:51.386 TEST_HEADER include/spdk/opal_spec.h 00:03:51.386 TEST_HEADER include/spdk/pci_ids.h 00:03:51.386 TEST_HEADER include/spdk/pipe.h 00:03:51.386 TEST_HEADER include/spdk/queue.h 00:03:51.386 TEST_HEADER include/spdk/reduce.h 00:03:51.386 TEST_HEADER include/spdk/rpc.h 00:03:51.386 TEST_HEADER include/spdk/scheduler.h 00:03:51.386 TEST_HEADER include/spdk/scsi.h 00:03:51.386 TEST_HEADER include/spdk/scsi_spec.h 00:03:51.386 TEST_HEADER include/spdk/sock.h 00:03:51.386 TEST_HEADER include/spdk/stdinc.h 00:03:51.386 TEST_HEADER include/spdk/string.h 00:03:51.386 TEST_HEADER include/spdk/thread.h 00:03:51.386 TEST_HEADER include/spdk/trace.h 00:03:51.386 TEST_HEADER include/spdk/trace_parser.h 00:03:51.386 TEST_HEADER include/spdk/tree.h 00:03:51.386 TEST_HEADER include/spdk/ublk.h 00:03:51.386 TEST_HEADER include/spdk/util.h 00:03:51.386 TEST_HEADER include/spdk/uuid.h 00:03:51.386 TEST_HEADER include/spdk/version.h 00:03:51.386 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:51.386 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:51.386 TEST_HEADER include/spdk/vhost.h 00:03:51.386 TEST_HEADER include/spdk/vmd.h 00:03:51.386 TEST_HEADER include/spdk/xor.h 00:03:51.386 TEST_HEADER include/spdk/zipf.h 00:03:51.386 CXX test/cpp_headers/accel.o 00:03:51.386 LINK hello_world 00:03:51.645 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:51.645 LINK mkfs 00:03:51.645 CXX test/cpp_headers/accel_module.o 00:03:51.645 CC examples/nvme/arbitration/arbitration.o 00:03:51.645 LINK blobcli 00:03:51.645 LINK reconnect 00:03:51.645 CC examples/bdev/hello_world/hello_bdev.o 00:03:51.645 CC test/env/mem_callbacks/mem_callbacks.o 00:03:51.645 CC 
examples/nvme/hotplug/hotplug.o 00:03:51.903 CXX test/cpp_headers/assert.o 00:03:51.903 CXX test/cpp_headers/barrier.o 00:03:51.903 CC test/env/vtophys/vtophys.o 00:03:51.903 LINK hello_bdev 00:03:51.903 CC test/event/event_perf/event_perf.o 00:03:51.903 LINK arbitration 00:03:51.903 LINK hotplug 00:03:51.903 LINK nvme_manage 00:03:51.903 CXX test/cpp_headers/base64.o 00:03:52.161 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:52.161 LINK vtophys 00:03:52.161 LINK event_perf 00:03:52.161 CXX test/cpp_headers/bdev.o 00:03:52.161 CXX test/cpp_headers/bdev_module.o 00:03:52.161 LINK env_dpdk_post_init 00:03:52.161 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:52.161 CC examples/nvme/abort/abort.o 00:03:52.419 CC examples/bdev/bdevperf/bdevperf.o 00:03:52.419 LINK iscsi_fuzz 00:03:52.419 LINK mem_callbacks 00:03:52.419 CC test/env/memory/memory_ut.o 00:03:52.419 CC test/event/reactor/reactor.o 00:03:52.419 CXX test/cpp_headers/bdev_zone.o 00:03:52.419 CC test/event/reactor_perf/reactor_perf.o 00:03:52.419 CC test/env/pci/pci_ut.o 00:03:52.419 LINK cmb_copy 00:03:52.419 LINK reactor 00:03:52.678 LINK reactor_perf 00:03:52.678 CXX test/cpp_headers/bit_array.o 00:03:52.678 LINK abort 00:03:52.678 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:52.678 CC test/lvol/esnap/esnap.o 00:03:52.678 CC test/event/app_repeat/app_repeat.o 00:03:52.678 CXX test/cpp_headers/bit_pool.o 00:03:52.678 CC test/event/scheduler/scheduler.o 00:03:52.937 CXX test/cpp_headers/blob_bdev.o 00:03:52.937 LINK pmr_persistence 00:03:52.937 LINK pci_ut 00:03:52.937 LINK app_repeat 00:03:52.937 CC test/nvme/aer/aer.o 00:03:52.937 CXX test/cpp_headers/blobfs_bdev.o 00:03:52.937 CC test/rpc_client/rpc_client_test.o 00:03:52.937 LINK scheduler 00:03:52.937 LINK bdevperf 00:03:53.195 CXX test/cpp_headers/blobfs.o 00:03:53.195 CXX test/cpp_headers/blob.o 00:03:53.195 LINK aer 00:03:53.195 LINK rpc_client_test 00:03:53.195 CXX test/cpp_headers/conf.o 00:03:53.195 CC test/accel/dif/dif.o 00:03:53.195 CXX test/cpp_headers/config.o 00:03:53.195 CXX test/cpp_headers/cpuset.o 00:03:53.453 CC test/nvme/reset/reset.o 00:03:53.453 CXX test/cpp_headers/crc16.o 00:03:53.453 CC test/nvme/sgl/sgl.o 00:03:53.453 CC test/nvme/e2edp/nvme_dp.o 00:03:53.453 LINK memory_ut 00:03:53.453 CC test/nvme/overhead/overhead.o 00:03:53.453 CC examples/nvmf/nvmf/nvmf.o 00:03:53.453 CXX test/cpp_headers/crc32.o 00:03:53.453 CC test/nvme/err_injection/err_injection.o 00:03:53.711 LINK reset 00:03:53.711 LINK dif 00:03:53.711 LINK sgl 00:03:53.711 CXX test/cpp_headers/crc64.o 00:03:53.711 CXX test/cpp_headers/dif.o 00:03:53.711 LINK nvme_dp 00:03:53.711 LINK overhead 00:03:53.711 LINK err_injection 00:03:53.969 LINK nvmf 00:03:53.969 CC test/nvme/startup/startup.o 00:03:53.969 CXX test/cpp_headers/dma.o 00:03:53.969 CC test/nvme/reserve/reserve.o 00:03:53.969 CC test/nvme/simple_copy/simple_copy.o 00:03:53.969 CC test/nvme/connect_stress/connect_stress.o 00:03:53.969 CC test/nvme/boot_partition/boot_partition.o 00:03:53.969 CC test/nvme/compliance/nvme_compliance.o 00:03:53.969 LINK startup 00:03:53.969 CXX test/cpp_headers/endian.o 00:03:54.227 LINK connect_stress 00:03:54.227 LINK reserve 00:03:54.227 CC test/bdev/bdevio/bdevio.o 00:03:54.227 CC test/nvme/fused_ordering/fused_ordering.o 00:03:54.227 LINK simple_copy 00:03:54.227 LINK boot_partition 00:03:54.227 CXX test/cpp_headers/env_dpdk.o 00:03:54.227 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:54.227 CXX test/cpp_headers/env.o 00:03:54.485 LINK fused_ordering 00:03:54.485 CC 
test/nvme/fdp/fdp.o 00:03:54.485 LINK nvme_compliance 00:03:54.485 CXX test/cpp_headers/event.o 00:03:54.485 CXX test/cpp_headers/fd_group.o 00:03:54.485 CC test/nvme/cuse/cuse.o 00:03:54.485 CXX test/cpp_headers/fd.o 00:03:54.485 LINK bdevio 00:03:54.485 CXX test/cpp_headers/file.o 00:03:54.485 LINK doorbell_aers 00:03:54.485 CXX test/cpp_headers/ftl.o 00:03:54.485 CXX test/cpp_headers/gpt_spec.o 00:03:54.485 CXX test/cpp_headers/hexlify.o 00:03:54.744 CXX test/cpp_headers/histogram_data.o 00:03:54.744 CXX test/cpp_headers/idxd.o 00:03:54.744 CXX test/cpp_headers/idxd_spec.o 00:03:54.744 LINK fdp 00:03:54.744 CXX test/cpp_headers/init.o 00:03:54.744 CXX test/cpp_headers/ioat.o 00:03:54.744 CXX test/cpp_headers/ioat_spec.o 00:03:54.744 CXX test/cpp_headers/iscsi_spec.o 00:03:54.744 CXX test/cpp_headers/json.o 00:03:55.001 CXX test/cpp_headers/jsonrpc.o 00:03:55.001 CXX test/cpp_headers/keyring.o 00:03:55.002 CXX test/cpp_headers/keyring_module.o 00:03:55.002 CXX test/cpp_headers/likely.o 00:03:55.002 CXX test/cpp_headers/log.o 00:03:55.002 CXX test/cpp_headers/lvol.o 00:03:55.002 CXX test/cpp_headers/memory.o 00:03:55.002 CXX test/cpp_headers/mmio.o 00:03:55.002 CXX test/cpp_headers/nbd.o 00:03:55.002 CXX test/cpp_headers/net.o 00:03:55.002 CXX test/cpp_headers/nvme.o 00:03:55.002 CXX test/cpp_headers/notify.o 00:03:55.002 CXX test/cpp_headers/nvme_intel.o 00:03:55.260 CXX test/cpp_headers/nvme_ocssd.o 00:03:55.260 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:55.260 CXX test/cpp_headers/nvme_spec.o 00:03:55.260 CXX test/cpp_headers/nvme_zns.o 00:03:55.260 CXX test/cpp_headers/nvmf_cmd.o 00:03:55.260 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:55.260 CXX test/cpp_headers/nvmf.o 00:03:55.260 CXX test/cpp_headers/nvmf_spec.o 00:03:55.260 CXX test/cpp_headers/nvmf_transport.o 00:03:55.520 CXX test/cpp_headers/opal.o 00:03:55.520 CXX test/cpp_headers/opal_spec.o 00:03:55.520 CXX test/cpp_headers/pci_ids.o 00:03:55.520 CXX test/cpp_headers/pipe.o 00:03:55.520 CXX test/cpp_headers/queue.o 00:03:55.520 CXX test/cpp_headers/reduce.o 00:03:55.520 CXX test/cpp_headers/rpc.o 00:03:55.520 CXX test/cpp_headers/scheduler.o 00:03:55.520 CXX test/cpp_headers/scsi.o 00:03:55.520 CXX test/cpp_headers/scsi_spec.o 00:03:55.520 CXX test/cpp_headers/sock.o 00:03:55.520 CXX test/cpp_headers/stdinc.o 00:03:55.520 CXX test/cpp_headers/string.o 00:03:55.780 CXX test/cpp_headers/thread.o 00:03:55.780 CXX test/cpp_headers/trace.o 00:03:55.780 CXX test/cpp_headers/trace_parser.o 00:03:55.780 CXX test/cpp_headers/tree.o 00:03:55.780 CXX test/cpp_headers/ublk.o 00:03:55.780 CXX test/cpp_headers/util.o 00:03:55.780 CXX test/cpp_headers/uuid.o 00:03:55.780 CXX test/cpp_headers/version.o 00:03:55.780 CXX test/cpp_headers/vfio_user_pci.o 00:03:55.780 CXX test/cpp_headers/vfio_user_spec.o 00:03:55.780 CXX test/cpp_headers/vhost.o 00:03:55.780 CXX test/cpp_headers/vmd.o 00:03:55.780 LINK cuse 00:03:55.780 CXX test/cpp_headers/xor.o 00:03:56.038 CXX test/cpp_headers/zipf.o 00:03:57.413 LINK esnap 00:03:57.979 00:03:57.979 real 1m3.836s 00:03:57.979 user 6m31.212s 00:03:57.979 sys 1m34.406s 00:03:57.979 19:42:26 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:57.979 19:42:26 make -- common/autotest_common.sh@10 -- $ set +x 00:03:57.979 ************************************ 00:03:57.979 END TEST make 00:03:57.979 ************************************ 00:03:57.979 19:42:26 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:57.979 19:42:26 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:57.979 19:42:26 
-- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:57.979 19:42:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.979 19:42:26 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:57.979 19:42:26 -- pm/common@44 -- $ pid=5129 00:03:57.979 19:42:26 -- pm/common@50 -- $ kill -TERM 5129 00:03:57.979 19:42:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.979 19:42:26 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:57.979 19:42:26 -- pm/common@44 -- $ pid=5130 00:03:57.979 19:42:26 -- pm/common@50 -- $ kill -TERM 5130 00:03:57.979 19:42:26 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:57.979 19:42:26 -- nvmf/common.sh@7 -- # uname -s 00:03:57.979 19:42:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:57.979 19:42:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:57.979 19:42:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:57.979 19:42:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:57.979 19:42:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:57.979 19:42:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:57.979 19:42:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:57.979 19:42:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:57.979 19:42:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:57.979 19:42:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:57.979 19:42:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:03:57.979 19:42:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:03:57.979 19:42:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:57.979 19:42:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:57.979 19:42:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:57.979 19:42:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:57.979 19:42:26 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:57.979 19:42:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:57.979 19:42:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:57.979 19:42:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:57.979 19:42:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.979 19:42:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.979 19:42:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.979 19:42:26 -- paths/export.sh@5 -- # export PATH 00:03:57.979 19:42:26 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.979 19:42:26 -- nvmf/common.sh@47 -- # : 0 00:03:57.979 19:42:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:57.979 19:42:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:57.980 19:42:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:57.980 19:42:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:57.980 19:42:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:57.980 19:42:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:57.980 19:42:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:57.980 19:42:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:57.980 19:42:26 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:57.980 19:42:26 -- spdk/autotest.sh@32 -- # uname -s 00:03:57.980 19:42:26 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:57.980 19:42:26 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:57.980 19:42:26 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:57.980 19:42:26 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:57.980 19:42:26 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:57.980 19:42:26 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:58.237 19:42:26 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:58.238 19:42:26 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:58.238 19:42:26 -- spdk/autotest.sh@48 -- # udevadm_pid=52779 00:03:58.238 19:42:26 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:58.238 19:42:26 -- pm/common@17 -- # local monitor 00:03:58.238 19:42:26 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:58.238 19:42:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:58.238 19:42:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:58.238 19:42:26 -- pm/common@25 -- # sleep 1 00:03:58.238 19:42:26 -- pm/common@21 -- # date +%s 00:03:58.238 19:42:26 -- pm/common@21 -- # date +%s 00:03:58.238 19:42:26 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721850146 00:03:58.238 19:42:26 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721850146 00:03:58.238 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721850146_collect-cpu-load.pm.log 00:03:58.238 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721850146_collect-vmstat.pm.log 00:03:59.172 19:42:27 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:59.172 19:42:27 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:59.172 19:42:27 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:59.172 19:42:27 -- common/autotest_common.sh@10 -- # set +x 00:03:59.172 19:42:27 -- spdk/autotest.sh@59 -- # create_test_list 00:03:59.172 19:42:27 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:59.172 19:42:27 -- common/autotest_common.sh@10 -- # set +x 00:03:59.172 19:42:27 -- 
spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:59.172 19:42:27 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:59.172 19:42:27 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:59.172 19:42:27 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:59.172 19:42:27 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:59.172 19:42:27 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:59.172 19:42:27 -- common/autotest_common.sh@1455 -- # uname 00:03:59.172 19:42:27 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:59.172 19:42:27 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:59.172 19:42:27 -- common/autotest_common.sh@1475 -- # uname 00:03:59.172 19:42:27 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:59.172 19:42:27 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:59.172 19:42:27 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:59.172 19:42:27 -- spdk/autotest.sh@72 -- # hash lcov 00:03:59.172 19:42:27 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:59.172 19:42:27 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:59.172 --rc lcov_branch_coverage=1 00:03:59.172 --rc lcov_function_coverage=1 00:03:59.172 --rc genhtml_branch_coverage=1 00:03:59.172 --rc genhtml_function_coverage=1 00:03:59.172 --rc genhtml_legend=1 00:03:59.172 --rc geninfo_all_blocks=1 00:03:59.172 ' 00:03:59.172 19:42:27 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:59.172 --rc lcov_branch_coverage=1 00:03:59.172 --rc lcov_function_coverage=1 00:03:59.173 --rc genhtml_branch_coverage=1 00:03:59.173 --rc genhtml_function_coverage=1 00:03:59.173 --rc genhtml_legend=1 00:03:59.173 --rc geninfo_all_blocks=1 00:03:59.173 ' 00:03:59.173 19:42:27 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:59.173 --rc lcov_branch_coverage=1 00:03:59.173 --rc lcov_function_coverage=1 00:03:59.173 --rc genhtml_branch_coverage=1 00:03:59.173 --rc genhtml_function_coverage=1 00:03:59.173 --rc genhtml_legend=1 00:03:59.173 --rc geninfo_all_blocks=1 00:03:59.173 --no-external' 00:03:59.173 19:42:27 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:59.173 --rc lcov_branch_coverage=1 00:03:59.173 --rc lcov_function_coverage=1 00:03:59.173 --rc genhtml_branch_coverage=1 00:03:59.173 --rc genhtml_function_coverage=1 00:03:59.173 --rc genhtml_legend=1 00:03:59.173 --rc geninfo_all_blocks=1 00:03:59.173 --no-external' 00:03:59.173 19:42:27 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:59.173 lcov: LCOV version 1.14 00:03:59.173 19:42:27 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:17.255 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:17.255 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:27.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:27.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 
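The lcov invocation above records a zero-coverage baseline before any tests run; the "no functions found" warnings that follow are expected for header-only objects. As a rough sketch, the overall coverage flow looks like this (simplified; the autotest passes the same --rc options shown above and runs the full suite between the two captures, and SPDK_DIR here is a stand-in for the repo path):

    # baseline: capture initial (zeroed) counters from the build tree
    lcov --no-external -c -i -d "$SPDK_DIR" -o cov_base.info
    # ... run the instrumented tests, which write .gcda files ...
    # capture post-test counters and merge with the baseline
    lcov --no-external -c -d "$SPDK_DIR" -o cov_test.info
    lcov -a cov_base.info -a cov_test.info -o cov_total.info
    genhtml cov_total.info -o coverage_html   # optional HTML report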
00:04:27.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:27.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:27.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:27.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:27.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:27.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:27.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:27.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:27.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:27.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:27.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:27.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:27.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:27.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:27.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:27.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:27.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:27.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:27.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:27.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:27.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:27.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:27.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:27.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:27.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:27.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:27.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:27.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:27.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:27.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:27.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:27.230 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:27.230 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:27.231 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:27.231 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:27.231 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:27.231 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:27.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:04:27.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:04:27.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:27.490 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:27.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:27.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:27.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:27.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:27.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:27.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:27.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:27.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:27.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:27.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:27.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:27.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:27.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:27.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:27.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:27.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:27.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:27.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:27.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:27.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:27.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:27.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:27.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:27.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:27.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:27.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:27.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:27.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:27.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:27.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:27.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:27.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:27.490 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:27.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:27.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:27.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:27.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:27.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:27.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:27.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:27.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:27.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:27.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:27.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:27.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:27.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:27.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:27.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:27.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:27.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:27.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:27.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:27.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:27.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:27.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:27.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:27.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:27.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:27.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:27.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:27.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:27.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:27.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:27.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:27.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:27.491 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:27.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:27.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:27.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:27.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:27.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:27.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:27.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:27.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:27.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:27.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:31.673 19:42:59 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:31.673 19:42:59 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:31.673 19:42:59 -- common/autotest_common.sh@10 -- # set +x 00:04:31.673 19:42:59 -- spdk/autotest.sh@91 -- # rm -f 00:04:31.673 19:42:59 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:31.931 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:31.931 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:31.931 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:31.931 19:43:00 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:31.931 19:43:00 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:31.931 19:43:00 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:31.931 19:43:00 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:31.931 19:43:00 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:31.931 19:43:00 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:31.931 19:43:00 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:31.931 19:43:00 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:31.931 19:43:00 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:31.931 19:43:00 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:31.931 19:43:00 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:31.931 19:43:00 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:31.931 19:43:00 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:31.931 19:43:00 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:31.931 19:43:00 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:31.931 19:43:00 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:31.931 19:43:00 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:31.932 19:43:00 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:31.932 19:43:00 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:31.932 19:43:00 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:31.932 19:43:00 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 
00:04:31.932 19:43:00 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:31.932 19:43:00 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:31.932 19:43:00 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:31.932 19:43:00 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:31.932 19:43:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:31.932 19:43:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:31.932 19:43:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:31.932 19:43:00 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:31.932 19:43:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:31.932 No valid GPT data, bailing 00:04:31.932 19:43:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:31.932 19:43:00 -- scripts/common.sh@391 -- # pt= 00:04:31.932 19:43:00 -- scripts/common.sh@392 -- # return 1 00:04:31.932 19:43:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:31.932 1+0 records in 00:04:31.932 1+0 records out 00:04:31.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00457213 s, 229 MB/s 00:04:31.932 19:43:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:31.932 19:43:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:31.932 19:43:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:31.932 19:43:00 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:31.932 19:43:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:31.932 No valid GPT data, bailing 00:04:31.932 19:43:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:31.932 19:43:00 -- scripts/common.sh@391 -- # pt= 00:04:31.932 19:43:00 -- scripts/common.sh@392 -- # return 1 00:04:31.932 19:43:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:31.932 1+0 records in 00:04:31.932 1+0 records out 00:04:31.932 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0040737 s, 257 MB/s 00:04:31.932 19:43:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:31.932 19:43:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:31.932 19:43:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:04:31.932 19:43:00 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:04:31.932 19:43:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:32.190 No valid GPT data, bailing 00:04:32.190 19:43:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:32.190 19:43:00 -- scripts/common.sh@391 -- # pt= 00:04:32.190 19:43:00 -- scripts/common.sh@392 -- # return 1 00:04:32.190 19:43:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:32.190 1+0 records in 00:04:32.190 1+0 records out 00:04:32.190 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00401796 s, 261 MB/s 00:04:32.190 19:43:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:32.190 19:43:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:32.190 19:43:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:04:32.190 19:43:00 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:04:32.190 19:43:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:32.190 No valid GPT data, bailing 00:04:32.190 19:43:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:32.190 
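Each whole NVMe namespace above goes through the same check-and-wipe pattern: probe for a partition table and, when none is found ("No valid GPT data, bailing"), zero the first MiB so later tests start from a clean device. A condensed sketch of that logic using plain blkid (the real autotest also calls SPDK's spdk-gpt.py and skips devices that are mounted or otherwise in use):

    for dev in /dev/nvme*n*; do
        [[ $dev == *p* ]] && continue                    # skip partitions, keep whole namespaces
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1      # no partition table -> wipe first MiB
        fi
    done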
19:43:00 -- scripts/common.sh@391 -- # pt= 00:04:32.190 19:43:00 -- scripts/common.sh@392 -- # return 1 00:04:32.190 19:43:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:32.190 1+0 records in 00:04:32.190 1+0 records out 00:04:32.190 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00403695 s, 260 MB/s 00:04:32.190 19:43:00 -- spdk/autotest.sh@118 -- # sync 00:04:32.190 19:43:00 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:32.190 19:43:00 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:32.190 19:43:00 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:34.090 19:43:02 -- spdk/autotest.sh@124 -- # uname -s 00:04:34.090 19:43:02 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:34.090 19:43:02 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:34.090 19:43:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.090 19:43:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.090 19:43:02 -- common/autotest_common.sh@10 -- # set +x 00:04:34.090 ************************************ 00:04:34.090 START TEST setup.sh 00:04:34.090 ************************************ 00:04:34.090 19:43:02 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:34.091 * Looking for test storage... 00:04:34.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:34.091 19:43:02 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:34.091 19:43:02 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:34.091 19:43:02 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:34.091 19:43:02 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.091 19:43:02 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.091 19:43:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:34.091 ************************************ 00:04:34.091 START TEST acl 00:04:34.091 ************************************ 00:04:34.091 19:43:02 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:34.091 * Looking for test storage... 
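The START TEST / END TEST banners and the per-suite timing seen here come from a run_test-style wrapper around each script. A stripped-down sketch of such a wrapper (illustrative only; SPDK's real run_test also manages xtrace and log collection):

    run_test_sketch() {
        local name=$1; shift
        local start=$SECONDS rc=0
        echo "************** START TEST $name **************"
        "$@" || rc=$?                                   # run the suite, keep its exit code
        echo "************** END TEST $name (rc=$rc, $((SECONDS - start))s) **************"
        return $rc
    }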
00:04:34.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:34.091 19:43:02 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:34.091 19:43:02 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:34.091 19:43:02 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:34.091 19:43:02 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:34.091 19:43:02 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:34.091 19:43:02 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:34.091 19:43:02 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:34.091 19:43:02 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:34.091 19:43:02 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:34.091 19:43:02 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:34.091 19:43:02 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:34.091 19:43:02 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:34.091 19:43:02 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:34.091 19:43:02 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:34.091 19:43:02 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:34.091 19:43:02 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:04:34.091 19:43:02 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:04:34.091 19:43:02 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:34.091 19:43:02 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:34.091 19:43:02 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:34.091 19:43:02 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:04:34.091 19:43:02 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:04:34.091 19:43:02 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:34.091 19:43:02 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:34.091 19:43:02 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:34.091 19:43:02 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:34.091 19:43:02 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:34.091 19:43:02 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:34.091 19:43:02 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:34.091 19:43:02 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:34.091 19:43:02 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:35.025 19:43:03 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:35.025 19:43:03 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:35.025 19:43:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:35.025 19:43:03 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:35.025 19:43:03 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.025 19:43:03 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:35.608 19:43:04 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:35.608 19:43:04 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:35.608 19:43:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:35.608 Hugepages 00:04:35.608 node hugesize free / total 00:04:35.608 19:43:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:35.608 19:43:04 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:35.608 19:43:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:35.608 00:04:35.608 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:35.608 19:43:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:35.608 19:43:04 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:35.608 19:43:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:35.608 19:43:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:35.608 19:43:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:35.608 19:43:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:35.608 19:43:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:35.608 19:43:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:35.608 19:43:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:35.608 19:43:04 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:35.608 19:43:04 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:35.608 19:43:04 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:35.608 19:43:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:35.608 19:43:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:35.608 19:43:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:35.608 19:43:04 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:35.608 19:43:04 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:35.608 19:43:04 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:35.608 19:43:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:35.608 19:43:04 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:04:35.608 19:43:04 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:35.608 19:43:04 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.608 19:43:04 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.608 19:43:04 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:35.866 ************************************ 00:04:35.866 START TEST denied 00:04:35.866 ************************************ 00:04:35.866 19:43:04 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:04:35.866 19:43:04 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:35.866 19:43:04 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:35.866 19:43:04 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.866 19:43:04 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:35.866 19:43:04 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:36.799 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:36.799 19:43:05 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:36.799 19:43:05 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:04:36.799 19:43:05 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:36.799 19:43:05 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:36.799 19:43:05 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:36.799 19:43:05 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:36.799 19:43:05 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:36.799 19:43:05 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:36.799 19:43:05 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:36.799 19:43:05 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:37.057 00:04:37.057 real 0m1.419s 00:04:37.057 user 0m0.568s 00:04:37.057 sys 0m0.798s 00:04:37.057 19:43:05 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.057 19:43:05 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:37.057 ************************************ 00:04:37.057 END TEST denied 00:04:37.057 ************************************ 00:04:37.315 19:43:05 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:37.315 19:43:05 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.315 19:43:05 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.315 19:43:05 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:37.315 ************************************ 00:04:37.315 START TEST allowed 00:04:37.315 ************************************ 00:04:37.315 19:43:05 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:04:37.315 19:43:05 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:37.315 19:43:05 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:37.315 19:43:05 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:37.315 19:43:05 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.315 19:43:05 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:37.881 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:37.881 19:43:06 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:04:37.881 19:43:06 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:37.881 19:43:06 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:37.881 19:43:06 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:37.881 19:43:06 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:37.881 19:43:06 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:37.881 19:43:06 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:37.881 19:43:06 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:37.881 19:43:06 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.881 19:43:06 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:38.817 00:04:38.817 real 0m1.437s 00:04:38.817 user 0m0.626s 00:04:38.817 sys 0m0.812s 00:04:38.817 19:43:07 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.817 19:43:07 setup.sh.acl.allowed -- common/autotest_common.sh@10 
-- # set +x 00:04:38.817 ************************************ 00:04:38.817 END TEST allowed 00:04:38.817 ************************************ 00:04:38.817 00:04:38.817 real 0m4.643s 00:04:38.817 user 0m2.008s 00:04:38.817 sys 0m2.591s 00:04:38.817 19:43:07 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.817 19:43:07 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:38.817 ************************************ 00:04:38.817 END TEST acl 00:04:38.817 ************************************ 00:04:38.817 19:43:07 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:38.817 19:43:07 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.817 19:43:07 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.817 19:43:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:38.817 ************************************ 00:04:38.817 START TEST hugepages 00:04:38.817 ************************************ 00:04:38.817 19:43:07 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:38.817 * Looking for test storage... 00:04:38.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:38.817 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:38.817 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:38.817 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:38.817 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:38.817 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:38.817 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:38.817 19:43:07 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:38.817 19:43:07 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:38.817 19:43:07 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:38.817 19:43:07 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:38.817 19:43:07 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.817 19:43:07 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.817 19:43:07 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.817 19:43:07 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.817 19:43:07 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.817 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.817 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 6017896 kB' 'MemAvailable: 7400620 kB' 'Buffers: 2436 kB' 'Cached: 1597064 kB' 'SwapCached: 0 kB' 'Active: 436348 kB' 'Inactive: 1268152 kB' 'Active(anon): 115488 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268152 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 106444 kB' 'Mapped: 48616 kB' 'Shmem: 10488 kB' 'KReclaimable: 61300 kB' 'Slab: 132196 kB' 'SReclaimable: 61300 kB' 'SUnreclaim: 70896 kB' 'KernelStack: 6256 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 
kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 335508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.818 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.819 
19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.819 
19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 
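[Note] The xtrace above is setup/common.sh walking /proc/meminfo key by key (IFS=': '; read -r var val _) until it hits Hugepagesize, echoing 2048 and returning; hugepages.sh then takes 2048 kB as default_hugepages, resets the per-node hugepage counters to 0 (clear_hp) and exports CLEAR_HUGE=yes before launching the default_setup test. A simplified standalone sketch of that scan pattern, using a hypothetical helper name and not the exact common.sh code, is:

# Illustrative sketch of the /proc/meminfo scan traced above (assumption:
# get_meminfo_sketch is a made-up name; the real helper is common.sh's get_meminfo).
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # IFS of ': ' splits "Hugepagesize:    2048 kB" into var/val/unit
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}
# Example: get_meminfo_sketch Hugepagesize  ->  2048   (kB, i.e. 2 MiB pages)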
00:04:38.819 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:38.819 19:43:07 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:38.819 19:43:07 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.819 19:43:07 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.819 19:43:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:38.819 ************************************ 00:04:38.819 START TEST default_setup 00:04:38.819 ************************************ 00:04:38.819 19:43:07 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:04:38.819 19:43:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:38.819 19:43:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:38.819 19:43:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:38.819 19:43:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:38.819 19:43:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:38.819 19:43:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:38.820 19:43:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:38.820 19:43:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:38.820 19:43:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:38.820 19:43:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:38.820 19:43:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:38.820 19:43:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:38.820 19:43:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:38.820 19:43:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:38.820 19:43:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:38.820 19:43:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:38.820 19:43:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:38.820 19:43:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:38.820 19:43:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:38.820 19:43:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:38.820 19:43:07 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.820 19:43:07 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:39.414 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:39.676 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:39.676 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@90 -- # local sorted_t 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8111288 kB' 'MemAvailable: 9493824 kB' 'Buffers: 2436 kB' 'Cached: 1597056 kB' 'SwapCached: 0 kB' 'Active: 452916 kB' 'Inactive: 1268160 kB' 'Active(anon): 132056 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268160 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 123132 kB' 'Mapped: 48704 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131712 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70800 kB' 'KernelStack: 6208 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
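[Note] The long printf line above is the mapfile'd /proc/meminfo snapshot that verify_nr_hugepages re-scans three times (AnonHugePages, then HugePages_Surp, then HugePages_Rsvd). Its hugepage figures line up with the request made a moment earlier by get_test_nr_hugepages (size 2097152 kB at the 2048 kB default, i.e. 1024 pages on node 0). The back-of-the-envelope check below is illustrative only and not part of the test script; the numbers are taken straight from the snapshot:

# 2097152 kB requested / 2048 kB per page -> 1024 pages (nr_hugepages)
echo $(( 2097152 / 2048 ))
# HugePages_Total * Hugepagesize -> the Hugetlb line in the snapshot
echo $(( 1024 * 2048 )) kB    # 2097152 kB, i.e. 2 GiB reserved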
00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.676 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 
19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.677 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241968 kB' 'MemFree: 8111288 kB' 'MemAvailable: 9493824 kB' 'Buffers: 2436 kB' 'Cached: 1597056 kB' 'SwapCached: 0 kB' 'Active: 452796 kB' 'Inactive: 1268160 kB' 'Active(anon): 131936 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268160 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123060 kB' 'Mapped: 48704 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131712 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70800 kB' 'KernelStack: 6224 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.678 19:43:08 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.678 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.679 19:43:08 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.679 19:43:08 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _
00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:39.679 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... repeated for each remaining /proc/meminfo key (ShmemHugePages through HugePages_Rsvd): IFS=': '; read -r var val _; [[ <key> == HugePages_Surp ]]; continue ...]
00:04:39.680 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.680 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:39.680 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:39.680 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:04:39.680 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:39.680 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:39.680 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:39.680 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:39.680 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:39.680 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.680 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:39.680 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:39.680 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.680 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.680 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:39.680 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:39.680 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8111288 kB' 'MemAvailable: 9493824 kB' 'Buffers: 2436 kB' 'Cached: 1597056 kB' 'SwapCached: 0 kB' 'Active: 452868 kB' 'Inactive: 1268160 kB' 'Active(anon): 132008 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268160 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123132 kB' 'Mapped: 48616 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131704 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70792 kB' 'KernelStack: 6240 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:04:39.680 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:39.680 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... repeated for each remaining /proc/meminfo key (MemFree through HugePages_Free): IFS=': '; read -r var val _; [[ <key> == HugePages_Rsvd ]]; continue ...]
00:04:39.682 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:39.682 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:39.682 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:39.682 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:39.682 nr_hugepages=1024 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:39.682 resv_hugepages=0 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:39.682 surplus_hugepages=0 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:39.682 anon_hugepages=0 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:39.682 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:39.682 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
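For readability, here is a minimal re-creation of the lookup that the xtrace above keeps exercising: read a meminfo file, split each "Key: value kB" line on IFS=': ', and print the value once the requested key is found. It is inferred from the trace only; the real helper lives in setup/common.sh and may differ, and the function name below is illustrative, not SPDK's.

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup traced above (inferred, not SPDK source).
    shopt -s extglob

    get_meminfo_sketch() {
            local get=$1 node=$2
            local var val rest mem_f line
            local -a mem
            mem_f=/proc/meminfo
            # A node id switches the source to the per-node sysfs view.
            if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                    mem_f=/sys/devices/system/node/node$node/meminfo
            fi
            mapfile -t mem < "$mem_f"
            # Per-node lines carry a "Node <id> " prefix; strip it (extglob pattern).
            mem=("${mem[@]#Node +([0-9]) }")
            for line in "${mem[@]}"; do
                    IFS=': ' read -r var val rest <<< "$line"
                    [[ $var == "$get" ]] || continue
                    echo "$val"
                    return 0
            done
            return 1
    }

    get_meminfo_sketch HugePages_Rsvd     # prints 0 on the VM above
    get_meminfo_sketch HugePages_Surp 0   # node 0 view, also 0 here

Both calls match what the trace reports (echo 0) before hugepages.sh stores the results as surp=0 and resv=0.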
00:04:39.682 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:39.682 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:39.682 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:39.682 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:39.682 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:39.682 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.682 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:39.682 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:39.682 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.682 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.682 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8111288 kB' 'MemAvailable: 9493824 kB' 'Buffers: 2436 kB' 'Cached: 1597056 kB' 'SwapCached: 0 kB' 'Active: 452788 kB' 'Inactive: 1268160 kB' 'Active(anon): 131928 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268160 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 123088 kB' 'Mapped: 48616 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131700 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70788 kB' 'KernelStack: 6224 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:04:39.682 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:39.682 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:39.682 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:39.682 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... repeated for each remaining /proc/meminfo key (MemFree through Unaccepted): IFS=': '; read -r var val _; [[ <key> == HugePages_Total ]]; continue ...]
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
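The block above is the core consistency check of the default_setup test: the HugePages_Total value read back (1024) must equal the requested nr_hugepages plus any surplus and reserved pages, and get_nodes then records every NUMA node found under /sys/devices/system/node. A compact sketch of that arithmetic under the same assumptions (nr_hugepages=1024, values taken from /proc/meminfo; names are illustrative, not the hugepages.sh source):

    #!/usr/bin/env bash
    # Sketch of the hugepage accounting check walked through above; it mirrors
    # the (( 1024 == nr_hugepages + surp + resv )) and get_nodes steps seen in
    # the xtrace, it is not copied from setup/hugepages.sh.
    shopt -s extglob

    nr_hugepages=1024
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

    if (( total == nr_hugepages + surp + resv )); then
            echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
    else
            echo "hugepage accounting mismatch" >&2
    fi

    # get_nodes equivalent: record one expected page count per NUMA node.
    declare -a nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
            [[ -d $node ]] || continue
            nodes_sys[${node##*node}]=$nr_hugepages
    done
    echo "no_nodes=${#nodes_sys[@]}"    # 1 on this single-node VM

On this run the check passes (1024 == 1024 + 0 + 0) and only node0 is found, which is why the per-node loop that follows queries /sys/devices/system/node/node0/meminfo.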
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8111044 kB' 'MemUsed: 4130924 kB' 'SwapCached: 0 kB' 'Active: 452928 kB' 'Inactive: 1268160 kB' 'Active(anon): 132068 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268160 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1599492 kB' 'Mapped: 48616 kB' 'AnonPages: 123172 kB' 'Shmem: 10464 kB' 'KernelStack: 6224 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60912 kB' 'Slab: 131700 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70788 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.945 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... repeated for each remaining node0 meminfo key (MemFree through HugePages_Free): IFS=': '; read -r var val _; [[ <key> == HugePages_Surp ]]; continue ...]
00:04:39.947 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.947 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:39.947 19:43:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:39.947 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:39.947 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:39.947 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 --
# sorted_t[nodes_test[node]]=1
00:04:39.947 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:39.947 node0=1024 expecting 1024 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:39.947 19:43:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:39.947
00:04:39.947 real 0m0.977s
00:04:39.947 user 0m0.448s
00:04:39.947 sys 0m0.478s
00:04:39.947 19:43:08 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:39.947 19:43:08 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:39.947 ************************************
00:04:39.947 END TEST default_setup
00:04:39.947 ************************************
00:04:39.947 19:43:08 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:39.947 19:43:08 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:39.947 19:43:08 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:39.947 19:43:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:39.947 ************************************
00:04:39.947 START TEST per_node_1G_alloc
00:04:39.947 ************************************
00:04:39.947 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:04:39.947 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:39.947 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:39.947 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:39.947 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:39.947 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:39.947 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:39.947 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:39.947 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:39.947 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:39.947 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:39.947 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:39.947 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:39.947 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:39.947 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:39.947 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:39.947 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:39.947 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:39.947 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:39.947 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- #
nodes_test[_no_nodes]=512 00:04:39.947 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:39.947 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:39.947 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:39.947 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:39.947 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.947 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:40.208 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:40.208 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:40.208 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:40.208 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:40.208 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:40.208 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:40.208 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:40.208 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:40.208 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:40.208 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:40.208 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:40.208 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:40.208 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:40.208 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:40.208 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:40.208 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:40.208 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.208 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.208 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.208 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.208 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.208 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.208 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.208 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9161636 kB' 'MemAvailable: 10544180 kB' 'Buffers: 2436 kB' 'Cached: 1597056 kB' 'SwapCached: 0 kB' 'Active: 453044 kB' 'Inactive: 1268168 kB' 'Active(anon): 132184 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 
'Inactive(file): 1268168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123216 kB' 'Mapped: 48756 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131700 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70788 kB' 'KernelStack: 6252 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
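The get_test_nr_hugepages 1048576 0 trace above reduces to one division: the requested 1048576 kB (1 GiB) over the 2048 kB Hugepagesize reported in the meminfo dump gives nr_hugepages=512, all of it assigned to node 0 and handed back to scripts/setup.sh as NRHUGE=512 HUGENODE=0. A minimal sketch of that sizing step, assuming the 2048 kB default page size; variable names are illustrative, not the exact hugepages.sh code:

#!/usr/bin/env bash
# Illustrative sizing step, reconstructed from the xtrace above.
# Assumes the 2048 kB Hugepagesize reported in the meminfo dump.

size_kb=1048576            # requested test allocation: 1 GiB in kB
hugepage_kb=2048           # default hugepage size
node_ids=(0)               # per_node_1G_alloc pins everything to node 0

nr_hugepages=$((size_kb / hugepage_kb))   # 1048576 / 2048 = 512

declare -a nodes_test=()
for node in "${node_ids[@]}"; do
  nodes_test[node]=$nr_hugepages          # expect 512 pages on node 0
done

# The test then reruns the setup script with these hints, e.g.:
#   NRHUGE=$nr_hugepages HUGENODE=0 ./scripts/setup.sh
echo "expecting ${nodes_test[0]} hugepages on node 0"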
00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
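Each long run of "continue" lines in this stretch is setup/common.sh's get_meminfo walking the captured meminfo snapshot field by field until the requested key (AnonHugePages here) matches, then echoing its value. A condensed sketch of that parsing pattern as it appears in the trace, not the verbatim SPDK helper; the per-node /sys path handling is simplified:

#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node N " prefix strip below

# Condensed reconstruction of the meminfo lookup seen in the xtrace.
get_meminfo() {
  local get=$1 node=${2:-}
  local mem_f=/proc/meminfo
  # With a node id, prefer the per-node view when it exists.
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  local -a mem
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")   # strip "Node N " from per-node files
  local line var val _
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] || continue   # -> the runs of "continue" above and below
    echo "$val"
    return 0
  done
  return 1
}

get_meminfo HugePages_Total   # prints 512 during this run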
00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.209 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9161636 kB' 'MemAvailable: 10544180 kB' 'Buffers: 2436 kB' 'Cached: 1597056 kB' 'SwapCached: 0 kB' 'Active: 453268 kB' 'Inactive: 1268168 kB' 'Active(anon): 132408 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123440 kB' 'Mapped: 48756 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131696 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70784 kB' 'KernelStack: 6236 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.210 19:43:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.210 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.211 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
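Taken together, these lookups are verify_nr_hugepages collecting the AnonHugePages value seen above (anon=0) plus the HugePages_Surp and HugePages_Rsvd values that finish just below, before comparing each node's total against the 512 pages the test requested, the same check that printed 'node0=1024 expecting 1024' for default_setup earlier. A rough outline inferred from the trace, not the verbatim hugepages.sh logic:

# Rough outline of the verification pass, inferred from the xtrace.
verify_nr_hugepages() {
  local anon surp resv node total
  anon=$(get_meminfo AnonHugePages)    # 0 in this run
  surp=$(get_meminfo HugePages_Surp)   # 0
  resv=$(get_meminfo HugePages_Rsvd)   # 0
  for node in "${!nodes_test[@]}"; do
    total=$(get_meminfo HugePages_Total "$node")
    echo "node$node=$total expecting ${nodes_test[node]}"
    [[ $total == "${nodes_test[node]}" ]] || return 1
  done
}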
00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9161636 kB' 'MemAvailable: 10544180 kB' 'Buffers: 2436 kB' 'Cached: 1597056 kB' 'SwapCached: 0 kB' 'Active: 453060 kB' 'Inactive: 1268168 kB' 'Active(anon): 132200 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123252 kB' 'Mapped: 48624 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131708 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70796 kB' 'KernelStack: 6260 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.474 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.475 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
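The comparisons traced above are setup/common.sh's get_meminfo walking the meminfo file: it loads every row with mapfile, strips any "Node <N>" prefix, re-splits each line on ': ' into a field name and a value, and keeps continuing until it reaches the requested field (HugePages_Rsvd in this pass), whose value it echoes. A minimal standalone sketch of that lookup follows; the helper name get_meminfo_value is hypothetical and only the parsing pattern mirrors the trace, not the literal SPDK implementation.

# Hypothetical helper mirroring the parse loop traced above (a sketch,
# not the SPDK setup/common.sh code itself).
get_meminfo_value() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo line var val _rest
    # Per-node queries switch to the node-specific file when it exists,
    # matching the [[ -e /sys/devices/system/node/node$node/meminfo ]]
    # check visible in the trace.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        # Per-node meminfo rows are prefixed with "Node <N> "; drop that
        # so the field names match the system-wide /proc/meminfo.
        [[ -n $node ]] && line=${line#"Node $node "}
        IFS=': ' read -r var val _rest <<<"$line"
        if [[ $var == "$key" ]]; then
            echo "$val"        # e.g. 0 for HugePages_Rsvd in this run
            return 0
        fi
    done <"$file"
    return 1
}

# Example usage:
#   get_meminfo_value HugePages_Rsvd      -> 0
#   get_meminfo_value HugePages_Surp 0    -> 0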
00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
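The meminfo snapshot dumped earlier in this pass reports HugePages_Total: 512, HugePages_Free: 512 and Hugepagesize: 2048 kB. The same counters are also exposed by the kernel per page size under /sys/kernel/mm/hugepages and per NUMA node under /sys/devices/system/node, which is a convenient way to cross-check by hand what the test asserts from meminfo; the commands below use standard kernel sysfs paths and assume the 2048 kB page size seen in this run.

# Manual cross-check of the counters the trace is asserting on.
cat /proc/sys/vm/nr_hugepages
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages
cat /sys/kernel/mm/hugepages/hugepages-2048kB/surplus_hugepages
# Per-node view (node0 on this single-node VM):
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages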
00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.476 
19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:40.476 nr_hugepages=512 00:04:40.476 resv_hugepages=0 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:40.476 surplus_hugepages=0 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:40.476 anon_hugepages=0 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9161636 kB' 'MemAvailable: 10544180 kB' 'Buffers: 2436 kB' 'Cached: 1597056 kB' 'SwapCached: 0 kB' 'Active: 452992 kB' 'Inactive: 1268168 kB' 'Active(anon): 132132 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123160 kB' 'Mapped: 48624 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131704 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70792 kB' 'KernelStack: 6244 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:40.476 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.477 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
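By this point the trace has already echoed nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and the scan running here fetches HugePages_Total so that hugepages.sh can assert the configured pool matches what the kernel reports. A simplified sketch of that arithmetic, reusing the hypothetical get_meminfo_value helper from the earlier sketch:

# Simplified version of the consistency check this rescan feeds
# (512 pages were requested by the per_node_1G_alloc test).
nr_hugepages=512
surp=$(get_meminfo_value HugePages_Surp)    # 0 in this log
resv=$(get_meminfo_value HugePages_Rsvd)    # 0 in this log
total=$(get_meminfo_value HugePages_Total)  # 512 in this log
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool consistent: ${total} pages"
else
    echo "hugepage pool mismatch: total=${total}, expected=$((nr_hugepages + surp + resv))" >&2
    exit 1
fi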
00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.478 19:43:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:40.478 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9161636 kB' 'MemUsed: 3080332 kB' 'SwapCached: 0 kB' 'Active: 453420 kB' 'Inactive: 1268168 kB' 'Active(anon): 132560 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1599492 kB' 'Mapped: 48624 kB' 'AnonPages: 123664 kB' 'Shmem: 10464 kB' 'KernelStack: 6260 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60912 kB' 'Slab: 131708 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70796 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
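The scan in progress here reads /sys/devices/system/node/node0/meminfo for HugePages_Surp, folds it into the per-node tally, and ends in the "node0=512 expecting 512" line further down. A condensed sketch of that per-node bookkeeping is shown below; the array name follows the nodes_test variable visible in the trace, while the surrounding logic is simplified and again assumes the get_meminfo_value helper from the earlier sketch.

# Condensed per-node tally behind the "node0=512 expecting 512" output
# (single-node VM here, so only node0 is visited).
declare -a nodes_test=()
expected_per_node=512
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # Expected pages for this node, plus any surplus the kernel reports
    # in the node's own meminfo.
    nodes_test[node]=$expected_per_node
    surp=$(get_meminfo_value HugePages_Surp "$node")   # 0 for node0 here
    (( nodes_test[node] += surp ))
    echo "node${node}=${nodes_test[node]} expecting ${expected_per_node}"
    [[ ${nodes_test[node]} -eq $expected_per_node ]] || exit 1
done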
00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.479 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:40.480 node0=512 expecting 512 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:40.480 00:04:40.480 real 0m0.528s 00:04:40.480 user 0m0.251s 00:04:40.480 sys 0m0.311s 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:40.480 19:43:08 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:40.480 ************************************ 00:04:40.480 END TEST per_node_1G_alloc 00:04:40.480 ************************************ 00:04:40.480 19:43:09 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:40.480 19:43:09 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.480 19:43:09 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.480 19:43:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:40.480 ************************************ 00:04:40.480 START TEST even_2G_alloc 00:04:40.480 ************************************ 00:04:40.480 19:43:09 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:04:40.480 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:40.480 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:40.480 
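
The even_2G_alloc test that begins here requests 2097152 kB of hugepages from get_test_nr_hugepages. With the 2048 kB page size reported later in the meminfo dumps (Hugepagesize: 2048 kB), that works out to 1024 pages, and since _no_nodes=1 the whole 1024 lands in nodes_test[0]; NRHUGE=1024 and HUGE_EVEN_ALLOC=yes are then exported so that scripts/setup.sh can spread the pool evenly across nodes. A minimal sketch of that arithmetic, using a made-up helper name and assuming a 2048 kB hugepage size (not the test script's exact code):

  #!/usr/bin/env bash
  # Illustrative only: split a requested size evenly across memory nodes,
  # the way get_test_nr_hugepages / get_test_nr_hugepages_per_node do in the trace.
  calc_even_hugepages() {
      local size_kb=$1 hugepagesize_kb=$2 nr_nodes=$3
      local total_pages=$((size_kb / hugepagesize_kb))   # 2097152 / 2048 = 1024
      local per_node=$((total_pages / nr_nodes))
      local node
      for ((node = 0; node < nr_nodes; node++)); do
          echo "node${node}=${per_node}"
      done
  }
  # Example matching this run: a single node and 2048 kB pages
  calc_even_hugepages 2097152 2048 1   # prints: node0=1024
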
19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:40.480 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:40.480 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:40.480 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:40.480 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:40.480 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:40.480 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:40.480 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:40.480 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:40.480 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:40.480 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:40.480 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:40.480 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:40.480 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:40.480 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:40.480 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:40.480 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:40.480 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:40.480 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:40.480 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:40.480 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.480 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:40.739 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:40.739 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:40.739 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:41.003 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:41.003 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:41.003 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:41.003 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:41.003 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:41.003 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:41.003 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:41.003 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:41.003 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:41.003 19:43:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@17 -- # local get=AnonHugePages 00:04:41.003 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:41.003 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.003 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.003 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.003 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.003 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.003 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.003 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.003 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8109296 kB' 'MemAvailable: 9491840 kB' 'Buffers: 2436 kB' 'Cached: 1597056 kB' 'SwapCached: 0 kB' 'Active: 453364 kB' 'Inactive: 1268168 kB' 'Active(anon): 132504 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123412 kB' 'Mapped: 48732 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131700 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70788 kB' 'KernelStack: 6228 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 
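
The long printf record above lists the /proc/meminfo fields that setup/common.sh captured with mapfile, and the run of continue records that follows is the field-by-field scan for the requested key (AnonHugePages here), driven by IFS=': ' and read -r var val _ exactly as traced. A compact sketch of the same lookup as a standalone function (the function name is illustrative, not part of the scripts):

  #!/usr/bin/env bash
  # Scan a meminfo-style file for one key and print its numeric value,
  # mirroring the IFS=': ' / read -r var val _ loop shown in the trace.
  meminfo_value() {
      local want=$1 file=${2:-/proc/meminfo}
      local var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$want" ]] || continue
          echo "$val"
          return 0
      done < "$file"
      return 1
  }
  meminfo_value AnonHugePages    # 0 in this run
  meminfo_value HugePages_Total  # 1024 in this run
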
19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.004 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.005 
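
Just before this first meminfo dump, verify_nr_hugepages tested the string "always [madvise] never" against the pattern *[never]*; that string has the usual format of the kernel's transparent-hugepage switch, so the test appears to decide whether anonymous hugepages could be in play before AnonHugePages is sampled. A hedged sketch of that kind of guard (the sysfs path is the standard kernel location; the surrounding logic is illustrative, not the script's exact code):

  #!/usr/bin/env bash
  # Illustrative: only sample AnonHugePages when transparent hugepages
  # are not globally disabled, mirroring the pattern test in the trace.
  thp_setting=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
  anon=0
  if [[ $thp_setting != *"[never]"* ]]; then
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
  fi
  echo "AnonHugePages (kB): $anon"
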
19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8109952 kB' 'MemAvailable: 9492496 kB' 'Buffers: 2436 kB' 'Cached: 1597056 kB' 'SwapCached: 0 kB' 'Active: 452884 kB' 'Inactive: 1268168 kB' 'Active(anon): 132024 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123168 kB' 'Mapped: 48616 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131704 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70792 kB' 'KernelStack: 6240 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 
'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
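
The second meminfo snapshot above is taken for HugePages_Surp, and a third follows for HugePages_Rsvd; together with HugePages_Total and HugePages_Free these are the counters the verification step reads while checking the 1024 pages requested through NRHUGE. A rough, illustrative consistency check over those counters (an assumption about intent, not the script's exact assertion):

  #!/usr/bin/env bash
  # Illustrative check of the hugepage counters against the requested count.
  expected=${NRHUGE:-1024}
  read -r total free rsvd surp < <(
      awk '/^HugePages_(Total|Free|Rsvd|Surp):/ {v[++n] = $2} END {print v[1], v[2], v[3], v[4]}' /proc/meminfo
  )
  echo "total=$total free=$free rsvd=$rsvd surp=$surp"
  if ((total == expected)); then
      echo "hugepage pool matches the requested $expected pages"
  else
      echo "unexpected hugepage pool size: $total (wanted $expected)" >&2
  fi
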
00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 
19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8109952 kB' 'MemAvailable: 9492496 kB' 'Buffers: 2436 kB' 'Cached: 1597056 kB' 'SwapCached: 0 kB' 'Active: 452872 kB' 'Inactive: 1268168 kB' 'Active(anon): 132012 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123116 kB' 'Mapped: 48616 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131704 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70792 kB' 'KernelStack: 6224 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.007 19:43:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.008 19:43:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.008 
19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.008 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.009 
19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:41.009 nr_hugepages=1024 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:41.009 resv_hugepages=0 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:41.009 surplus_hugepages=0 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:41.009 anon_hugepages=0 00:04:41.009 19:43:09 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8109952 kB' 'MemAvailable: 9492496 kB' 'Buffers: 2436 kB' 'Cached: 1597056 kB' 'SwapCached: 0 kB' 'Active: 452848 kB' 'Inactive: 1268168 kB' 'Active(anon): 131988 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268168 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 123136 kB' 'Mapped: 48616 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131696 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70784 kB' 'KernelStack: 6256 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
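
The printf/mapfile pair above captures a full /proc/meminfo snapshot, and the long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / "continue" entries around this point is just bash xtrace printing one loop that skips every key until HugePages_Total is reached and echoed (1024 on this VM); the backslash-escaped right-hand sides are xtrace quoting, not corruption. A minimal, self-contained sketch of that lookup follows; get_meminfo_sketch is a hypothetical, simplified stand-in for the traced setup/common.sh helper, not its verbatim code.

    # Hedged sketch, not the SPDK setup/common.sh implementation:
    # return one value out of /proc/meminfo, or out of a per-NUMA-node
    # meminfo file when a node number is supplied.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while read -r line; do
            line=${line#Node $node }        # per-node lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < "$mem_f"
        return 1
    }

    # Calls mirroring the trace:
    #   get_meminfo_sketch HugePages_Total    # prints 1024 on this VM
    #   get_meminfo_sketch HugePages_Surp 0   # reads node0's meminfo, prints 0

With no node argument the helper reads the system-wide file, which is exactly the pair of queries this test performs before and after the per-node accounting below.
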
00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.009 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.010 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8110204 kB' 'MemUsed: 4131764 kB' 'SwapCached: 0 kB' 'Active: 452328 kB' 'Inactive: 1268164 kB' 'Active(anon): 131468 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268164 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1599488 kB' 'Mapped: 48616 kB' 'AnonPages: 122596 kB' 'Shmem: 10464 kB' 'KernelStack: 6160 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60912 kB' 'Slab: 131688 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70776 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.011 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 
19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
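
The scan just below resolves HugePages_Surp for node0 to 0, after which hugepages.sh folds it into the per-node counters, prints "node0=1024 expecting 1024" and passes the [[ 1024 == 1024 ]] check that closes even_2G_alloc. A compact sketch of that bookkeeping, reusing the hypothetical get_meminfo_sketch helper from the earlier aside; this is an illustration of the traced logic under the single-node layout seen here (no_nodes=1), not the exact hugepages.sh code.

    # Hedged sketch of the verification the trace walks through.
    verify_nr_hugepages_sketch() {
        local expected=$1                  # 1024 for the even_2G_alloc case
        local total resv surp node got node_surp

        total=$(get_meminfo_sketch HugePages_Total)
        resv=$(get_meminfo_sketch HugePages_Rsvd)
        surp=$(get_meminfo_sketch HugePages_Surp)

        # System-wide: the kernel's pool must equal the requested pages
        # plus any surplus and reserved pages (all 0 in this run).
        (( total == expected + surp + resv )) || return 1

        # Per NUMA node: this sketch assumes the single-node case from the
        # trace, where the whole pool is expected on node0.
        for node in /sys/devices/system/node/node[0-9]*; do
            node=${node##*node}
            got=$(get_meminfo_sketch HugePages_Total "$node")
            node_surp=$(get_meminfo_sketch HugePages_Surp "$node")
            echo "node$node=$got expecting $expected"
            (( got - node_surp == expected )) || return 1
        done
    }

    # verify_nr_hugepages_sketch 1024     # -> "node0=1024 expecting 1024"
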
00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.012 node0=1024 expecting 1024 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:41.012 00:04:41.012 real 0m0.552s 00:04:41.012 user 0m0.276s 00:04:41.012 sys 0m0.302s 00:04:41.012 19:43:09 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.013 19:43:09 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:41.013 
************************************ 00:04:41.013 END TEST even_2G_alloc 00:04:41.013 ************************************ 00:04:41.013 19:43:09 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:41.013 19:43:09 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.013 19:43:09 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.013 19:43:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:41.013 ************************************ 00:04:41.013 START TEST odd_alloc 00:04:41.013 ************************************ 00:04:41.013 19:43:09 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:04:41.013 19:43:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:41.013 19:43:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:41.013 19:43:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:41.013 19:43:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:41.013 19:43:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:41.013 19:43:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:41.013 19:43:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:41.013 19:43:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:41.013 19:43:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:41.013 19:43:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:41.013 19:43:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:41.013 19:43:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:41.013 19:43:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:41.013 19:43:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:41.013 19:43:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.013 19:43:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:41.013 19:43:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:41.013 19:43:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:41.013 19:43:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.013 19:43:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:41.013 19:43:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:41.013 19:43:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:41.013 19:43:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.013 19:43:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:41.586 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:41.586 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:41.586 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:41.586 19:43:10 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8108008 kB' 'MemAvailable: 9490556 kB' 'Buffers: 2436 kB' 'Cached: 1597060 kB' 'SwapCached: 0 kB' 'Active: 452988 kB' 'Inactive: 1268172 kB' 'Active(anon): 132128 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123296 kB' 'Mapped: 48788 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131696 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70784 kB' 'KernelStack: 6328 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.586 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.587 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8108324 kB' 'MemAvailable: 9490872 kB' 'Buffers: 2436 kB' 'Cached: 1597060 kB' 'SwapCached: 0 kB' 'Active: 453220 kB' 'Inactive: 1268172 kB' 'Active(anon): 132360 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123516 kB' 'Mapped: 48788 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131704 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70792 kB' 'KernelStack: 6216 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
54532 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.588 19:43:10 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.588 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.589 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:41.590 19:43:10 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8108072 kB' 'MemAvailable: 9490620 kB' 'Buffers: 2436 kB' 'Cached: 1597060 kB' 'SwapCached: 0 kB' 'Active: 452712 kB' 'Inactive: 1268172 kB' 'Active(anon): 131852 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122964 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131704 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70792 kB' 'KernelStack: 6208 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 19:43:10 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.590 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
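Throughout these scans the meminfo snapshot stays stable: HugePages_Total: 1025, HugePages_Free: 1025, HugePages_Rsvd: 0, HugePages_Surp: 0, Hugepagesize: 2048 kB, Hugetlb: 2099200 kB. That matches the odd allocation the test set up earlier: HUGEMEM=2049 corresponds to 2049 x 1024 kB = 2098176 kB, which at 2048 kB per page does not divide evenly (1024.5) and ends up as the odd count of 1025 pages, and 1025 x 2048 kB = 2099200 kB of hugetlb memory. With surp and resv both read back as 0, the consistency check that follows, (( 1025 == nr_hugepages + surp + resv )), reduces to 1025 == 1025. A worked check of that arithmetic, using only numbers already printed in the trace (the ceiling division below illustrates the rounding; it is not necessarily the exact formula hugepages.sh uses):

    # All values come from the trace above; only this stand-alone snippet is new.
    hugemem_mib=2049
    hugepagesize_kib=2048
    size_kib=$((hugemem_mib * 1024))                                        # 2098176 kB requested
    nr_hugepages=$(((size_kib + hugepagesize_kib - 1) / hugepagesize_kib))  # rounds up to 1025 pages
    hugetlb_kib=$((nr_hugepages * hugepagesize_kib))                        # 2099200 kB, matching Hugetlb:
    surp=0 resv=0
    (( 1025 == nr_hugepages + surp + resv )) && echo 'odd_alloc accounting holds'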
00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.591 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 
19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:41.592 nr_hugepages=1025 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:41.592 resv_hugepages=0 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:41.592 surplus_hugepages=0 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:41.592 anon_hugepages=0 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8108356 
kB' 'MemAvailable: 9490904 kB' 'Buffers: 2436 kB' 'Cached: 1597060 kB' 'SwapCached: 0 kB' 'Active: 452684 kB' 'Inactive: 1268172 kB' 'Active(anon): 131824 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122940 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131704 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70792 kB' 'KernelStack: 6192 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.592 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 
19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.593 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:41.594 
19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8108356 kB' 'MemUsed: 4133612 kB' 'SwapCached: 0 kB' 'Active: 452944 kB' 'Inactive: 1268172 kB' 'Active(anon): 132084 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1599496 kB' 'Mapped: 48620 kB' 'AnonPages: 123200 kB' 'Shmem: 10464 kB' 'KernelStack: 6192 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60912 kB' 'Slab: 131704 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 19:43:10 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.594 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.595 node0=1025 expecting 1025 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:41.595 
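The odd_alloc verification traced above reads the HugePages_* counters twice: first from /proc/meminfo for the whole system, then from /sys/devices/system/node/node0/meminfo for node 0, before checking node0=1025 against the expected 1025. The helper visible in the trace slurps the file with mapfile, strips the leading "Node <n> " prefix with an extglob substitution, and walks each line with IFS=': ' until the requested key matches, echoing its value. A standalone sketch of the same lookup follows; get_meminfo_field is an illustrative name, not the setup/common.sh helper being traced.

# Illustrative helper (not SPDK's get_meminfo): return the value of one
# meminfo field, either system-wide or for a single NUMA node.
get_meminfo_field() {
    local field=$1 node=$2
    local mem_f=/proc/meminfo
    # Per-node counters live in sysfs when a node number is given.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node lines carry a leading "Node <n> " prefix, so match the
    # "<field>:" token wherever it appears and print the number after it.
    awk -v f="$field:" '{ for (i = 1; i <= NF; i++) if ($i == f) { print $(i + 1); exit } }' "$mem_f"
}
# e.g. get_meminfo_field HugePages_Free 0   -> 1025 for the node-0 state traced above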
00:04:41.595 real 0m0.522s 00:04:41.595 user 0m0.265s 00:04:41.595 sys 0m0.289s 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.595 19:43:10 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:41.595 ************************************ 00:04:41.595 END TEST odd_alloc 00:04:41.595 ************************************ 00:04:41.595 19:43:10 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:41.595 19:43:10 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.595 19:43:10 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.595 19:43:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:41.595 ************************************ 00:04:41.595 START TEST custom_alloc 00:04:41.595 ************************************ 00:04:41.595 19:43:10 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:04:41.595 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:41.595 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:41.595 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:41.595 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:41.595 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:41.595 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:41.595 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:41.595 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:41.595 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:41.595 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:41.595 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.596 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:42.168 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:42.168 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:42.168 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:42.168 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:42.168 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:42.168 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:42.168 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:42.168 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:42.168 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:42.168 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:42.168 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:42.169 19:43:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9157828 kB' 'MemAvailable: 10540376 kB' 'Buffers: 2436 kB' 'Cached: 1597060 kB' 'SwapCached: 0 kB' 'Active: 452868 kB' 'Inactive: 1268172 kB' 'Active(anon): 132008 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123384 kB' 'Mapped: 48704 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131696 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70784 kB' 'KernelStack: 6196 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
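In the custom_alloc snapshot just printed, 'HugePages_Total: 512' and 'Hugetlb: 1048576 kB' line up with the earlier get_test_nr_hugepages 1048576 step: a 1048576 kB request divided by the default 2048 kB 'Hugepagesize' gives the 512 pages placed on node 0. A minimal sketch of that arithmetic; pages_for_size is an illustrative name, not part of the traced scripts.

# Illustrative only: how many default-size hugepages a kB-sized request needs.
pages_for_size() {
    local size_kb=$1 hp_kb
    hp_kb=$(awk '$1 == "Hugepagesize:" { print $2 }' /proc/meminfo)
    echo $(( size_kb / hp_kb ))
}
# pages_for_size 1048576   -> 512 when Hugepagesize is 2048 kB, matching the
# HUGENODE='nodes_hp[0]=512' setting applied before "setup output" above.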
00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.169 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9157576 kB' 'MemAvailable: 10540124 kB' 'Buffers: 2436 kB' 'Cached: 1597060 kB' 'SwapCached: 0 kB' 'Active: 452704 kB' 'Inactive: 1268172 kB' 'Active(anon): 131844 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123208 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131708 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70796 kB' 'KernelStack: 6164 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.170 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.171 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.172 19:43:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9157324 kB' 'MemAvailable: 10539872 kB' 'Buffers: 2436 kB' 'Cached: 1597060 kB' 'SwapCached: 0 kB' 'Active: 452628 kB' 'Inactive: 1268172 kB' 'Active(anon): 131768 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122844 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131700 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70788 kB' 'KernelStack: 6192 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.172 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.173 19:43:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.173 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.174 19:43:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:42.174 nr_hugepages=512 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:42.174 resv_hugepages=0 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:42.174 surplus_hugepages=0 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:42.174 anon_hugepages=0 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:42.174 
19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9157324 kB' 'MemAvailable: 10539872 kB' 'Buffers: 2436 kB' 'Cached: 1597060 kB' 'SwapCached: 0 kB' 'Active: 452628 kB' 'Inactive: 1268172 kB' 'Active(anon): 131768 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123104 kB' 'Mapped: 48620 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131684 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70772 kB' 'KernelStack: 6192 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.174 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 
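The repeating pairs of common.sh@31/@32 entries above are get_meminfo walking the snapshot it printed a moment earlier, one "Key: value" pair at a time, until it reaches the field it was asked for (here HugePages_Total). A simplified stand-in for that lookup, assuming a plain /proc/meminfo read with no per-node handling (the function name is invented for the example; the real helper is the get_meminfo function in setup/common.sh shown in the trace):

    # Sketch of the lookup pattern visible in the trace: split each
    # "Key:   value kB" line on ': ' and stop at the requested key.
    get_meminfo_value() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$want" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_value HugePages_Total   # prints 512 at this point in the run
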
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.175 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.176 19:43:10 
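A note on reading these entries: the right-hand side of every comparison is printed as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l, which is how bash's xtrace renders a match pattern supplied from a quoted variable, escaping each character so it is matched literally rather than as a glob. This is consistent with the script comparing $var against a quoted "$get". A short reproduction, illustrative only:

    get=HugePages_Total
    set -x
    # xtrace is expected to print: [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
    if [[ MemFree == "$get" ]]; then echo match; fi
    set +x
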
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 9157324 kB' 'MemUsed: 3084644 kB' 'SwapCached: 0 kB' 
'Active: 452512 kB' 'Inactive: 1268172 kB' 'Active(anon): 131652 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1599496 kB' 'Mapped: 48620 kB' 'AnonPages: 122756 kB' 'Shmem: 10464 kB' 'KernelStack: 6244 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60912 kB' 'Slab: 131680 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70768 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.176 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:42.177 node0=512 expecting 512 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:42.177 00:04:42.177 real 0m0.533s 00:04:42.177 user 0m0.270s 00:04:42.177 sys 0m0.295s 00:04:42.177 19:43:10 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.178 19:43:10 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:42.178 ************************************ 00:04:42.178 END TEST custom_alloc 00:04:42.178 ************************************ 00:04:42.178 19:43:10 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:42.178 19:43:10 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.178 19:43:10 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.178 19:43:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:42.178 ************************************ 00:04:42.178 START TEST no_shrink_alloc 00:04:42.178 ************************************ 00:04:42.178 19:43:10 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:04:42.178 19:43:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:42.178 19:43:10 
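custom_alloc wraps up by confirming the pages actually landed on node 0: with node=0, get_meminfo switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo, strips the leading "Node 0 " prefix from each line, and reads HugePages_Total/Free/Surp there before printing 'node0=512 expecting 512'. The next test, no_shrink_alloc, then requests 2097152 kB, which at the default Hugepagesize of 2048 kB works out to the nr_hugepages=1024 seen in its trace. A condensed sketch of the per-node readout (the helper name is made up; the sysfs path is the one the trace itself uses):

    # Sketch: read one hugepage counter for a given NUMA node, the way the
    # trace above does through /sys/devices/system/node/node0/meminfo.
    node_meminfo() {
        local node=$1 key=$2 _node _id var val _
        # Node files prefix every line with "Node <N> ", so read two extra fields.
        while IFS=': ' read -r _node _id var val _; do
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }

    node_meminfo 0 HugePages_Total   # 512 at the end of custom_alloc
    node_meminfo 0 HugePages_Surp    # 0
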
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:42.178 19:43:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:42.178 19:43:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:42.178 19:43:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:42.178 19:43:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:42.178 19:43:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:42.178 19:43:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:42.178 19:43:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:42.178 19:43:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:42.178 19:43:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:42.178 19:43:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:42.178 19:43:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:42.178 19:43:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:42.178 19:43:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:42.178 19:43:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:42.178 19:43:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:42.178 19:43:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:42.178 19:43:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:42.178 19:43:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:42.178 19:43:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.178 19:43:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:42.436 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:42.436 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:42.436 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=AnonHugePages 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8107844 kB' 'MemAvailable: 9490392 kB' 'Buffers: 2436 kB' 'Cached: 1597060 kB' 'SwapCached: 0 kB' 'Active: 453392 kB' 'Inactive: 1268172 kB' 'Active(anon): 132532 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123444 kB' 'Mapped: 48752 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131704 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70792 kB' 'KernelStack: 6248 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.700 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.701 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8107844 kB' 'MemAvailable: 9490392 kB' 'Buffers: 2436 kB' 'Cached: 1597060 kB' 'SwapCached: 0 kB' 'Active: 452996 kB' 'Inactive: 1268172 kB' 'Active(anon): 132136 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123276 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 
'KReclaimable: 60912 kB' 'Slab: 131712 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70800 kB' 'KernelStack: 6240 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.702 19:43:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.702 
19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.702 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.703 19:43:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.703 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8107844 kB' 'MemAvailable: 9490392 kB' 'Buffers: 2436 kB' 'Cached: 1597060 kB' 'SwapCached: 0 kB' 'Active: 452964 kB' 'Inactive: 1268172 kB' 'Active(anon): 132104 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123204 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131712 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70800 kB' 'KernelStack: 6208 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.704 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.705 
19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.705 19:43:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.705 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.706 19:43:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:42.706 nr_hugepages=1024 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:42.706 resv_hugepages=0 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:42.706 surplus_hugepages=0 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:42.706 anon_hugepages=0 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8107844 kB' 'MemAvailable: 9490392 kB' 'Buffers: 2436 kB' 'Cached: 1597060 kB' 'SwapCached: 0 kB' 'Active: 452928 kB' 'Inactive: 1268172 kB' 'Active(anon): 132068 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123168 kB' 'Mapped: 48612 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131712 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70800 kB' 'KernelStack: 6192 kB' 'PageTables: 4132 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.706 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.707 
19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.707 
19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.707 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
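The wall of near-identical entries here is the xtrace of setup/common.sh's get_meminfo helper: it reads the relevant meminfo file into an array, strips any "Node <n> " prefix, then splits each line with IFS=': ' and read -r var val _, skipping (continue) every key until it reaches the one that was requested, at which point it echoes the value and returns. A minimal stand-alone sketch of that pattern follows; the name get_meminfo_sketch and its exact structure are illustrative only, not the verbatim SPDK helper, and it assumes bash with extglob enabled for the Node-prefix strip.

#!/usr/bin/env bash
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _
    local -a mem
    # Per-node queries read the node-specific file, as the trace does for node 0.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines carry a "Node <n> " prefix; drop it so the keys
    # look the same as in plain /proc/meminfo.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # e.g. var=HugePages_Total val=1024
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Example: get_meminfo_sketch HugePages_Total    -> 1024 on this test box
#          get_meminfo_sketch HugePages_Surp 0   -> 0 for node 0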
00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:42.708 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8107844 kB' 'MemUsed: 4134124 kB' 'SwapCached: 0 kB' 'Active: 452892 kB' 'Inactive: 1268172 kB' 'Active(anon): 132032 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1599496 kB' 'Mapped: 48612 kB' 'AnonPages: 123172 kB' 'Shmem: 10464 kB' 'KernelStack: 6192 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60912 kB' 'Slab: 131704 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
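Once get_meminfo has returned the global HugePages_Total of 1024, setup/hugepages.sh@110 checks that it equals nr_hugepages + surplus + reserved, then walks every node under /sys/devices/system/node (get_nodes) and repeats the accounting per node; that is what produces the 'node0=1024 expecting 1024' line further down, once the HugePages_Surp scan running through these entries finishes. A rough equivalent in terms of the per-node sysfs counters is sketched below; verify_nodes_sketch is a hypothetical name, and reading nr_hugepages from sysfs is an assumption on my part (the SPDK helper derives its counts from the per-node meminfo shown in this trace).

#!/usr/bin/env bash

# Compare every NUMA node's 2048 kB hugepage pool against an expected count.
verify_nodes_sketch() {
    local expected=$1 node count
    for node in /sys/devices/system/node/node[0-9]*; do
        count=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
        echo "${node##*/}=$count expecting $expected"
        (( count == expected )) || return 1
    done
}

# Example: verify_nodes_sketch 1024   -> node0=1024 expecting 1024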
00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 
19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.709 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.710 19:43:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:42.710 node0=1024 expecting 1024 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.710 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:42.968 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:42.968 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:42.968 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:42.968 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:42.968 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:42.968 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:43.232 19:43:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8104852 kB' 'MemAvailable: 9487400 kB' 'Buffers: 2436 kB' 'Cached: 1597060 kB' 'SwapCached: 0 kB' 'Active: 453580 kB' 'Inactive: 1268172 kB' 'Active(anon): 132720 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123836 kB' 'Mapped: 48740 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131720 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70808 kB' 'KernelStack: 6260 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB' 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.232 19:43:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.232 19:43:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.232 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.233 
19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.233 19:43:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.234 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8104852 kB' 'MemAvailable: 9487400 kB' 'Buffers: 2436 kB' 'Cached: 1597060 kB' 'SwapCached: 0 kB' 'Active: 452916 kB' 'Inactive: 1268172 kB' 'Active(anon): 132056 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 
00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.233 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:43.234 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8104852 kB' 'MemAvailable: 9487400 kB' 'Buffers: 2436 kB' 'Cached: 1597060 kB' 'SwapCached: 0 kB' 'Active: 452916 kB' 'Inactive: 1268172 kB' 'Active(anon): 132056 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123160 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131700 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70788 kB' 'KernelStack: 6212 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:04:43.234 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:43.234 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.234 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:43.235 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:43.235 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:43.235 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.235 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:43.235 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:43.235 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
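The scan traced above always has the same shape: load the (optionally per-node) meminfo snapshot, split each line on ': ', skip non-matching keys with continue, then echo the value of the requested key and return. As a readability aid, here is a minimal bash sketch of that pattern reconstructed from the xtrace alone; the name get_meminfo_sketch, the node-argument handling and the extglob setup are assumptions for illustration, not the actual setup/common.sh source.

shopt -s extglob

# Approximation of the get_meminfo scan seen in the trace (reconstructed
# from the log output, not copied from setup/common.sh).
get_meminfo_sketch() {
    local get=$1 node=${2:-}        # key to look up, optional NUMA node id
    local var val
    local mem_f=/proc/meminfo
    local -a mem
    # With a node id, read that node's own meminfo file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip it so the
    # key names line up with the plain /proc/meminfo layout.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip until the requested key
        echo "$val"                        # bare value, e.g. 0 or 1024
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

With the snapshot printed above, get_meminfo_sketch HugePages_Surp would print 0 and get_meminfo_sketch HugePages_Total would print 1024.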
00:04:43.235 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:43.235 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:43.235 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:43.235 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:43.235 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:43.235 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.235 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.235 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.235 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.235 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.235 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:43.235 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:43.236 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8104852 kB' 'MemAvailable: 9487400 kB' 'Buffers: 2436 kB' 'Cached: 1597060 kB' 'SwapCached: 0 kB' 'Active: 452892 kB' 'Inactive: 1268172 kB' 'Active(anon): 132032 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123108 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131700 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70788 kB' 'KernelStack: 6180 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 352696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:04:43.236 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:43.236 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:43.237 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:43.237 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:43.237 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:43.237 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:43.237 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:43.237 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:43.237 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:43.237 nr_hugepages=1024
00:04:43.237 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:43.237 resv_hugepages=0
00:04:43.237 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:43.238 surplus_hugepages=0
00:04:43.238 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:43.238 anon_hugepages=0
00:04:43.238 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:43.238 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
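The hugepages.sh@107 and @109 guards above amount to a simple accounting check: the 1024 pages configured for the test must all be visible as HugePages_Total, with no surplus, reserved or anonymous huge pages skewing the count. A hedged sketch of that check, reusing the illustrative get_meminfo_sketch helper from earlier (the function name and its expected argument are assumptions; the commented values mirror this run):

# Sketch of the no_shrink_alloc accounting check traced at hugepages.sh@107
# and @109; not the actual setup/hugepages.sh source.
verify_no_shrink_alloc_sketch() {
    local expected=$1                                    # 1024 in this run
    local anon surp resv nr_hugepages
    anon=$(get_meminfo_sketch AnonHugePages)             # 0 (kB) in the snapshot
    surp=$(get_meminfo_sketch HugePages_Surp)            # 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)            # 0
    nr_hugepages=$(get_meminfo_sketch HugePages_Total)   # 1024
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    # Every configured page must be accounted for: no surplus or reserved
    # pages, and the reported total must equal the expected count.
    (( expected == nr_hugepages + surp + resv )) || return 1
    (( expected == nr_hugepages ))
}

Against the snapshot above, verify_no_shrink_alloc_sketch 1024 succeeds because HugePages_Total is 1024 while the surplus, reserved and anonymous counters are all 0.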
00:04:43.238 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:43.238 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:43.238 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:43.238 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:43.238 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:43.238 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.238 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.238 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.238 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.238 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.238 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:43.238 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:43.238 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8105372 kB' 'MemAvailable: 9487920 kB' 'Buffers: 2436 kB' 'Cached: 1597060 kB' 'SwapCached: 0 kB' 'Active: 448028 kB' 'Inactive: 1268172 kB' 'Active(anon): 127168 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118244 kB' 'Mapped: 48280 kB' 'Shmem: 10464 kB' 'KReclaimable: 60912 kB' 'Slab: 131676 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70764 kB' 'KernelStack: 6132 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54500 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 5076992 kB' 'DirectMap1G: 9437184 kB'
00:04:43.238 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:43.238 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:43.239 19:43:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:43.239 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:43.240 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.240 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:43.240 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:43.240 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.240 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.240 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8113580 kB' 'MemUsed: 4128388 kB' 'SwapCached: 0 kB' 'Active: 447648 kB' 'Inactive: 1268172 kB' 'Active(anon): 126788 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1268172 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1599496 kB' 'Mapped: 47940 kB' 'AnonPages: 118116 kB' 'Shmem: 10464 kB' 'KernelStack: 6080 kB' 'PageTables: 3640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 60912 kB' 'Slab: 131552 kB' 'SReclaimable: 60912 kB' 'SUnreclaim: 70640 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:43.240 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.240 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.240 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.240 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.240 19:43:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.240 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.240 19:43:11 [setup/common.sh@31-32 repeats the same IFS=': ' / read -r var val _ / [[ field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue trace once per remaining node0 meminfo field: MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted] 00:04:43.241 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
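The field scan traced around this point is the per-node variant of setup/common.sh get_meminfo: it reads /sys/devices/system/node/node0/meminfo (falling back to /proc/meminfo when no node is given), strips the "Node 0" prefix from each row, and walks the fields with IFS=': ' until it reaches the requested key, here HugePages_Surp. Below is a minimal standalone sketch of that pattern; the function name and the line-by-line loop are a simplified approximation, not the exact helper (which uses mapfile plus an extglob prefix strip, as the trace shows).

# Sketch: look one field up in /proc/meminfo or a node's meminfo, the same way
# the IFS=': ' / read / continue loop traced here does.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#"Node $node "}            # per-node rows are prefixed with "Node <N> "
        IFS=': ' read -r var val _ <<<"$line" # field name, value, optional unit
        if [[ $var == "$get" ]]; then
            echo "$val"                       # value only; the kB unit is dropped, as above
            return 0
        fi
    done < "$mem_f"
    return 1
}

For example, get_meminfo_sketch HugePages_Surp 0 would print 0 on this node, which is the value the real helper echoes a few entries further on.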
00:04:43.241 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.241 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.241 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.241 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.241 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:43.241 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:43.241 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:43.241 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.241 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:43.241 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:43.241 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:43.241 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:43.241 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:43.241 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:43.241 node0=1024 expecting 1024 00:04:43.241 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:43.241 19:43:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:43.241 00:04:43.241 real 0m0.999s 00:04:43.241 user 0m0.531s 00:04:43.241 sys 0m0.539s 00:04:43.241 19:43:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.241 19:43:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:43.241 ************************************ 00:04:43.241 END TEST no_shrink_alloc 00:04:43.241 ************************************ 00:04:43.241 19:43:11 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:43.241 19:43:11 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:43.241 19:43:11 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:43.241 19:43:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:43.241 19:43:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:43.241 19:43:11 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:43.241 19:43:11 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:43.241 19:43:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:43.241 19:43:11 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:43.241 ************************************ 00:04:43.241 END TEST hugepages 00:04:43.241 ************************************ 00:04:43.241 00:04:43.241 real 0m4.546s 00:04:43.241 user 0m2.204s 00:04:43.241 sys 0m2.471s 00:04:43.241 19:43:11 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.241 19:43:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:43.241 19:43:11 setup.sh -- 
setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:43.241 19:43:11 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.241 19:43:11 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.241 19:43:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:43.241 ************************************ 00:04:43.241 START TEST driver 00:04:43.241 ************************************ 00:04:43.241 19:43:11 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:43.500 * Looking for test storage... 00:04:43.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:43.500 19:43:11 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:43.500 19:43:11 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:43.500 19:43:11 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:44.068 19:43:12 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:44.068 19:43:12 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.068 19:43:12 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.068 19:43:12 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:44.068 ************************************ 00:04:44.068 START TEST guess_driver 00:04:44.068 ************************************ 00:04:44.068 19:43:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:44.068 19:43:12 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:44.068 19:43:12 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:44.068 19:43:12 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:44.068 19:43:12 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:44.068 19:43:12 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:44.068 19:43:12 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:44.068 19:43:12 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:44.068 19:43:12 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:44.068 19:43:12 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:44.068 19:43:12 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:44.068 19:43:12 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:44.068 19:43:12 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:44.068 19:43:12 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:44.068 19:43:12 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:44.068 19:43:12 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:44.068 19:43:12 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:44.068 19:43:12 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:44.068 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:44.068 19:43:12 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo 
uio_pci_generic 00:04:44.068 Looking for driver=uio_pci_generic 00:04:44.068 19:43:12 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:44.068 19:43:12 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:44.068 19:43:12 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:44.068 19:43:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.068 19:43:12 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:44.068 19:43:12 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.068 19:43:12 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:44.635 19:43:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:44.635 19:43:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:44.635 19:43:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.635 19:43:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.635 19:43:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:44.635 19:43:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.635 19:43:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.635 19:43:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:44.635 19:43:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.892 19:43:13 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:44.892 19:43:13 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:44.892 19:43:13 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:44.892 19:43:13 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:45.459 00:04:45.459 real 0m1.342s 00:04:45.459 user 0m0.515s 00:04:45.459 sys 0m0.844s 00:04:45.459 19:43:13 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.459 19:43:13 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:45.459 ************************************ 00:04:45.459 END TEST guess_driver 00:04:45.459 ************************************ 00:04:45.459 00:04:45.459 real 0m2.035s 00:04:45.459 user 0m0.762s 00:04:45.459 sys 0m1.342s 00:04:45.459 19:43:13 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.459 19:43:13 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:45.459 ************************************ 00:04:45.459 END TEST driver 00:04:45.459 ************************************ 00:04:45.459 19:43:13 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:45.459 19:43:13 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.459 19:43:13 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.459 19:43:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:45.459 ************************************ 00:04:45.459 START TEST devices 00:04:45.459 
************************************ 00:04:45.459 19:43:13 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:45.459 * Looking for test storage... 00:04:45.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:45.459 19:43:14 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:45.459 19:43:14 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:45.459 19:43:14 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:45.459 19:43:14 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:46.410 19:43:14 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:46.410 19:43:14 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:46.410 19:43:14 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:46.410 19:43:14 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:46.410 19:43:14 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:46.410 19:43:14 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:46.410 19:43:14 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:46.410 19:43:14 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:46.410 19:43:14 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:46.410 19:43:14 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:46.410 19:43:14 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:46.410 19:43:14 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:46.410 19:43:14 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:46.410 19:43:14 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:46.410 19:43:14 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:04:46.410 19:43:14 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:46.410 19:43:14 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:46.410 19:43:14 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:46.410 19:43:14 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:46.410 19:43:14 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:46.410 19:43:14 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:46.410 19:43:14 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:46.410 19:43:14 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 
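For reference, the min_disk_size of 3221225472 bytes declared just above is 3 GiB, and the zoned checks a few entries earlier are simply reads of each block device's queue/zoned sysfs attribute (none, host-aware, or host-managed). A condensed sketch of that zoned-device filter, assuming the same /sys/block/nvme* layout as the trace, is below; it illustrates the check rather than reproducing the autotest_common.sh helper verbatim.

# Sketch: collect zoned NVMe namespaces by reading queue/zoned, mirroring
# the is_block_zoned checks traced above; zoned namespaces get skipped later.
declare -A zoned_devs=()
for sysdir in /sys/block/nvme*; do
    [[ -e $sysdir/queue/zoned ]] || continue
    if [[ $(<"$sysdir/queue/zoned") != none ]]; then
        zoned_devs[${sysdir##*/}]=1        # e.g. zoned_devs[nvme0n1]=1
    fi
done
echo "zoned devices: ${!zoned_devs[*]}"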
00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:46.410 19:43:14 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:46.410 19:43:14 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:46.410 No valid GPT data, bailing 00:04:46.410 19:43:14 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:46.410 19:43:14 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:46.410 19:43:14 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:46.410 19:43:14 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:46.410 19:43:14 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:46.410 19:43:14 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:46.410 19:43:14 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:46.410 19:43:14 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:46.410 No valid GPT data, bailing 00:04:46.410 19:43:14 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:46.410 19:43:14 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:46.410 19:43:14 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:46.410 19:43:14 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:46.410 19:43:14 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:46.410 19:43:14 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 
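Each namespace that survives the loop above is then probed for an existing partition table with scripts/spdk-gpt.py (the "No valid GPT data, bailing" lines) and blkid -s PTTYPE, and checked against the minimum size, before being recorded in blocks/blocks_to_pci. The sketch below condenses that decision into one function; it is a simplified stand-in for block_in_use/sec_size_to_bytes, the spdk-gpt.py step is omitted, and reading the byte count from /sys/block/<dev>/size (512-byte sectors) is an assumption about how the helper computes it.

# Sketch: is this namespace unclaimed and at least min_disk_size bytes?
usable_test_disk() {
    local block=$1
    local min_size=$((3 * 1024 * 1024 * 1024))   # 3 GiB, the min_disk_size above
    local pt size
    # blkid prints nothing (and fails) when the device has no partition table,
    # which is exactly the "not in use" case the trace is looking for.
    pt=$(blkid -s PTTYPE -o value "/dev/$block" 2>/dev/null) || pt=
    [[ -z $pt ]] || return 1
    size=$(( $(<"/sys/block/$block/size") * 512 ))  # sysfs size is in 512-byte sectors
    (( size >= min_size ))
}
usable_test_disk nvme0n1 && echo "nvme0n1 can host the tests"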
00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:46.410 19:43:14 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:46.410 19:43:14 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:46.410 No valid GPT data, bailing 00:04:46.410 19:43:14 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:46.410 19:43:14 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:46.410 19:43:14 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:46.410 19:43:14 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:46.410 19:43:14 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:46.410 19:43:14 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:46.410 19:43:14 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:46.410 19:43:14 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:46.410 19:43:14 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:46.410 No valid GPT data, bailing 00:04:46.410 19:43:15 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:46.410 19:43:15 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:46.410 19:43:15 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:46.410 19:43:15 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:46.410 19:43:15 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:46.410 19:43:15 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:46.410 19:43:15 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:46.411 19:43:15 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:46.411 19:43:15 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:46.411 19:43:15 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:46.411 19:43:15 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:46.411 19:43:15 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:46.411 19:43:15 setup.sh.devices 
-- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:46.411 19:43:15 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.411 19:43:15 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.411 19:43:15 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:46.411 ************************************ 00:04:46.411 START TEST nvme_mount 00:04:46.411 ************************************ 00:04:46.411 19:43:15 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:46.411 19:43:15 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:46.411 19:43:15 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:46.411 19:43:15 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:46.411 19:43:15 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:46.411 19:43:15 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:46.411 19:43:15 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:46.411 19:43:15 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:46.411 19:43:15 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:46.411 19:43:15 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:46.411 19:43:15 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:46.411 19:43:15 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:46.411 19:43:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:46.411 19:43:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:46.411 19:43:15 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:46.411 19:43:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:46.411 19:43:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:46.411 19:43:15 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:46.411 19:43:15 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:46.411 19:43:15 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:47.787 Creating new GPT entries in memory. 00:04:47.787 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:47.787 other utilities. 00:04:47.787 19:43:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:47.787 19:43:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:47.787 19:43:16 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:47.787 19:43:16 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:47.787 19:43:16 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:48.722 Creating new GPT entries in memory. 00:04:48.722 The operation has completed successfully. 
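The partition step traced above boils down to: wipe whatever label is on the disk, create one partition spanning sectors 2048-264191 (the (( size /= 4096 )) arithmetic applied to the 1073741824-byte request), and only proceed once the kernel has announced /dev/nvme0n1p1. A compressed sketch follows; udevadm settle stands in for scripts/sync_dev_uevents.sh, which waits for the partition uevent itself, while the sgdisk invocations and the flock on the disk node match the trace.

# Sketch: recreate the single test partition the way partition_drive does above.
disk=nvme0n1
size=$((1073741824 / 4096))           # same arithmetic as (( size /= 4096 )) in the trace
part_start=2048
part_end=$((part_start + size - 1))   # 264191
sgdisk "/dev/$disk" --zap-all                                   # destroy old GPT/MBR data
flock "/dev/$disk" sgdisk "/dev/$disk" --new=1:$part_start:$part_end
udevadm settle                        # wait for udev to create the partition node
[[ -b /dev/${disk}p1 ]] && echo "/dev/${disk}p1 is ready"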
00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 56981 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:48.722 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.981 19:43:17 
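Once the partition exists, the test formats it, mounts it under test/setup/nvme_mount, drops a marker file on it, and re-runs setup.sh in output/config mode with PCI_ALLOWED set so it can confirm the mounted namespace is reported as an active device and left unbound (the "so not binding PCI dev" check above). A shortened sketch of that format/mount/verify/cleanup cycle, with the long repo path replaced by a hypothetical /tmp/nvme_mount, is:

# Sketch: format, mount, mark, verify, and tear down, mirroring the trace above.
part=/dev/nvme0n1p1
mnt=/tmp/nvme_mount          # stand-in for .../spdk/test/setup/nvme_mount
mkdir -p "$mnt"
mkfs.ext4 -qF "$part"        # quiet, force
mount "$part" "$mnt"
: > "$mnt/test_nvme"         # the marker file the verify step looks for
mountpoint -q "$mnt" && echo "$part is mounted and must not be rebound to a driver"
# cleanup, as at the end of the test: drop the marker, unmount, wipe signatures
rm "$mnt/test_nvme"
umount "$mnt"
wipefs --all "$part"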
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:48.981 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.981 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:48.981 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.981 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:48.981 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:48.981 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.981 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:48.981 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:48.981 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:48.981 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.981 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.981 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.981 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:48.981 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:48.981 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:48.981 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:49.240 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:49.240 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:49.240 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:49.240 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:49.240 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:49.240 19:43:17 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:49.240 19:43:17 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:49.240 19:43:17 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:49.240 19:43:17 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:49.240 19:43:17 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:49.499 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:49.499 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:49.499 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:04:49.499 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:49.499 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:49.499 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:49.499 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:49.499 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:49.499 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:49.499 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.499 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:49.499 19:43:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:49.499 19:43:17 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.499 19:43:17 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:49.499 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:49.499 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:49.499 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:49.499 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.499 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:49.499 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.757 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:49.757 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.757 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:49.757 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.757 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:49.757 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:49.757 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:49.757 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:49.757 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:49.757 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:49.757 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:49.757 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:49.757 19:43:18 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:49.757 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:49.757 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:49.757 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:49.757 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:49.757 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:49.757 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.757 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:49.757 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:49.757 19:43:18 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.757 19:43:18 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:50.015 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:50.015 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:50.015 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:50.015 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.015 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:50.015 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.272 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:50.272 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.272 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:50.272 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.272 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:50.272 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:50.272 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:50.272 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:50.272 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:50.671 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.671 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:50.671 19:43:18 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:50.671 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:50.671 00:04:50.671 real 0m3.901s 00:04:50.671 user 0m0.643s 00:04:50.671 sys 0m1.005s 00:04:50.671 19:43:18 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.671 19:43:18 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:50.671 ************************************ 00:04:50.671 END TEST nvme_mount 00:04:50.671 
************************************ 00:04:50.671 19:43:18 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:50.671 19:43:18 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.671 19:43:18 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.671 19:43:18 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:50.671 ************************************ 00:04:50.671 START TEST dm_mount 00:04:50.671 ************************************ 00:04:50.671 19:43:18 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:50.671 19:43:18 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:50.671 19:43:18 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:50.671 19:43:18 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:50.671 19:43:19 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:50.671 19:43:19 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:50.671 19:43:19 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:50.671 19:43:19 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:50.671 19:43:19 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:50.671 19:43:19 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:50.671 19:43:19 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:50.671 19:43:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:50.671 19:43:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:50.671 19:43:19 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:50.671 19:43:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:50.671 19:43:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:50.671 19:43:19 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:50.671 19:43:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:50.671 19:43:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:50.671 19:43:19 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:50.671 19:43:19 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:50.671 19:43:19 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:51.619 Creating new GPT entries in memory. 00:04:51.619 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:51.619 other utilities. 00:04:51.619 19:43:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:51.619 19:43:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:51.619 19:43:20 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:51.619 19:43:20 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:51.619 19:43:20 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:52.554 Creating new GPT entries in memory. 00:04:52.554 The operation has completed successfully. 
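The dm_mount trace above wipes the scratch NVMe disk and then carves two equal partitions with sgdisk under flock, while the repo's sync_dev_uevents.sh helper waits for the new block devices to appear. A minimal standalone sketch of that partitioning flow, reconstructed from the commands in the trace (device name and sector ranges are exactly the ones shown; the uevent helper is replaced here by a plain udevadm settle):

    #!/usr/bin/env bash
    # Reconstruction of the dm_mount partitioning step traced above.
    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all                               # drop any existing GPT/MBR metadata
    flock "$disk" sgdisk "$disk" --new=1:2048:264191       # partition 1: 262144 sectors
    udevadm settle                                         # stand-in for sync_dev_uevents.sh
    flock "$disk" sgdisk "$disk" --new=2:264192:526335     # partition 2: another 262144 sectors
    udevadm settle                                         # wait for /dev/nvme0n1p2 to appear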
00:04:52.554 19:43:21 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:52.554 19:43:21 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:52.554 19:43:21 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:52.554 19:43:21 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:52.554 19:43:21 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:53.489 The operation has completed successfully. 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57417 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.489 19:43:22 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:53.748 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:53.748 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:53.748 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:53.748 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.748 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:53.748 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.008 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:54.008 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.008 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:54.008 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.008 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:54.008 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:54.008 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:54.008 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:54.008 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:54.008 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:54.008 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:54.008 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:54.008 19:43:22 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:54.008 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:54.008 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:54.008 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:54.008 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:54.008 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:54.008 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.008 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:54.008 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:54.008 19:43:22 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.008 19:43:22 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:54.267 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:54.267 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:54.267 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:54.267 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.267 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:54.267 19:43:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.526 19:43:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:54.526 19:43:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.526 19:43:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:54.526 19:43:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.526 19:43:23 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:54.526 19:43:23 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:54.526 19:43:23 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:54.526 19:43:23 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:54.526 19:43:23 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:54.526 19:43:23 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:54.526 19:43:23 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:54.526 19:43:23 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:54.526 19:43:23 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:54.526 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:54.526 19:43:23 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:54.526 19:43:23 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:04:54.786 00:04:54.786 real 0m4.202s 00:04:54.786 user 0m0.488s 00:04:54.786 sys 0m0.686s 00:04:54.786 19:43:23 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:54.786 19:43:23 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:54.786 ************************************ 00:04:54.786 END TEST dm_mount 00:04:54.786 ************************************ 00:04:54.786 19:43:23 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:54.786 19:43:23 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:54.786 19:43:23 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.786 19:43:23 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:54.786 19:43:23 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:54.786 19:43:23 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:54.786 19:43:23 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:55.046 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:55.046 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:55.046 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:55.046 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:55.046 19:43:23 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:55.046 19:43:23 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:55.046 19:43:23 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:55.046 19:43:23 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:55.046 19:43:23 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:55.046 19:43:23 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:55.046 19:43:23 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:55.046 ************************************ 00:04:55.046 END TEST devices 00:04:55.046 ************************************ 00:04:55.046 00:04:55.046 real 0m9.595s 00:04:55.046 user 0m1.780s 00:04:55.046 sys 0m2.249s 00:04:55.046 19:43:23 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.046 19:43:23 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:55.046 00:04:55.046 real 0m21.091s 00:04:55.046 user 0m6.857s 00:04:55.046 sys 0m8.820s 00:04:55.046 19:43:23 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:55.046 19:43:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:55.046 ************************************ 00:04:55.046 END TEST setup.sh 00:04:55.046 ************************************ 00:04:55.046 19:43:23 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:55.614 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:55.614 Hugepages 00:04:55.614 node hugesize free / total 00:04:55.614 node0 1048576kB 0 / 0 00:04:55.614 node0 2048kB 2048 / 2048 00:04:55.614 00:04:55.614 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:55.873 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:55.873 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:55.873 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:04:55.873 19:43:24 -- spdk/autotest.sh@130 -- # uname -s 00:04:55.873 19:43:24 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:55.873 19:43:24 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:55.873 19:43:24 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:56.809 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:56.809 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:56.809 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:56.809 19:43:25 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:57.744 19:43:26 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:57.744 19:43:26 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:57.744 19:43:26 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:57.744 19:43:26 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:57.744 19:43:26 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:57.744 19:43:26 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:57.744 19:43:26 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:57.744 19:43:26 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:57.744 19:43:26 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:58.002 19:43:26 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:58.002 19:43:26 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:58.002 19:43:26 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:58.261 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:58.261 Waiting for block devices as requested 00:04:58.261 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:58.261 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:58.519 19:43:26 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:58.519 19:43:26 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:58.519 19:43:26 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:58.519 19:43:26 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:58.519 19:43:26 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:58.519 19:43:26 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:58.519 19:43:26 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:58.519 19:43:26 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:58.519 19:43:26 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:58.519 19:43:26 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:58.519 19:43:26 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:58.519 19:43:26 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:58.519 19:43:26 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:58.519 19:43:26 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:58.519 19:43:26 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:58.519 19:43:26 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:58.519 19:43:26 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 
00:04:58.519 19:43:26 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:58.519 19:43:26 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:58.519 19:43:26 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:58.519 19:43:26 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:58.519 19:43:26 -- common/autotest_common.sh@1557 -- # continue 00:04:58.519 19:43:26 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:58.519 19:43:26 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:58.519 19:43:26 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:58.519 19:43:26 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:58.519 19:43:26 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:58.519 19:43:26 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:58.519 19:43:26 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:58.519 19:43:26 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:58.519 19:43:26 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:58.519 19:43:26 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:58.519 19:43:26 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:58.519 19:43:26 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:58.519 19:43:26 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:58.519 19:43:26 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:58.519 19:43:26 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:58.519 19:43:26 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:58.519 19:43:27 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:58.519 19:43:26 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:58.519 19:43:27 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:58.519 19:43:27 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:58.519 19:43:27 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:58.519 19:43:27 -- common/autotest_common.sh@1557 -- # continue 00:04:58.519 19:43:27 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:58.519 19:43:27 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:58.519 19:43:27 -- common/autotest_common.sh@10 -- # set +x 00:04:58.519 19:43:27 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:58.519 19:43:27 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:58.519 19:43:27 -- common/autotest_common.sh@10 -- # set +x 00:04:58.519 19:43:27 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:59.087 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:59.345 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:59.345 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:59.345 19:43:27 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:59.345 19:43:27 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:59.345 19:43:27 -- common/autotest_common.sh@10 -- # set +x 00:04:59.345 19:43:27 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:59.345 19:43:27 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:59.345 19:43:27 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:59.345 19:43:27 -- common/autotest_common.sh@1577 -- 
# bdfs=() 00:04:59.345 19:43:27 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:59.345 19:43:27 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:59.345 19:43:27 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:59.345 19:43:27 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:59.345 19:43:27 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:59.345 19:43:27 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:59.345 19:43:27 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:59.604 19:43:28 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:59.604 19:43:28 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:59.604 19:43:28 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:59.604 19:43:28 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:59.604 19:43:28 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:59.604 19:43:28 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:59.604 19:43:28 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:59.604 19:43:28 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:59.604 19:43:28 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:59.604 19:43:28 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:59.604 19:43:28 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:59.604 19:43:28 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:59.604 19:43:28 -- common/autotest_common.sh@1593 -- # return 0 00:04:59.604 19:43:28 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:59.604 19:43:28 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:59.604 19:43:28 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:59.604 19:43:28 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:59.604 19:43:28 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:59.604 19:43:28 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:59.604 19:43:28 -- common/autotest_common.sh@10 -- # set +x 00:04:59.604 19:43:28 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:04:59.604 19:43:28 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:59.604 19:43:28 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:59.604 19:43:28 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:59.604 19:43:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.604 19:43:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.604 19:43:28 -- common/autotest_common.sh@10 -- # set +x 00:04:59.604 ************************************ 00:04:59.604 START TEST env 00:04:59.604 ************************************ 00:04:59.604 19:43:28 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:59.604 * Looking for test storage... 
00:04:59.604 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:59.604 19:43:28 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:59.604 19:43:28 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.604 19:43:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.604 19:43:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.604 ************************************ 00:04:59.604 START TEST env_memory 00:04:59.604 ************************************ 00:04:59.604 19:43:28 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:59.604 00:04:59.604 00:04:59.604 CUnit - A unit testing framework for C - Version 2.1-3 00:04:59.604 http://cunit.sourceforge.net/ 00:04:59.604 00:04:59.604 00:04:59.604 Suite: memory 00:04:59.604 Test: alloc and free memory map ...[2024-07-24 19:43:28.200203] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:59.604 passed 00:04:59.604 Test: mem map translation ...[2024-07-24 19:43:28.232283] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:59.604 [2024-07-24 19:43:28.232380] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:59.604 [2024-07-24 19:43:28.232461] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:59.604 [2024-07-24 19:43:28.232479] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:59.863 passed 00:04:59.863 Test: mem map registration ...[2024-07-24 19:43:28.296807] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:59.863 [2024-07-24 19:43:28.296886] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:59.863 passed 00:04:59.863 Test: mem map adjacent registrations ...passed 00:04:59.863 00:04:59.863 Run Summary: Type Total Ran Passed Failed Inactive 00:04:59.863 suites 1 1 n/a 0 0 00:04:59.863 tests 4 4 4 0 0 00:04:59.863 asserts 152 152 152 0 n/a 00:04:59.863 00:04:59.863 Elapsed time = 0.216 seconds 00:04:59.863 00:04:59.863 real 0m0.230s 00:04:59.863 user 0m0.219s 00:04:59.863 sys 0m0.008s 00:04:59.863 19:43:28 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.863 19:43:28 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:59.863 ************************************ 00:04:59.863 END TEST env_memory 00:04:59.863 ************************************ 00:04:59.863 19:43:28 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:59.863 19:43:28 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.863 19:43:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.863 19:43:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.863 ************************************ 00:04:59.863 START TEST env_vtophys 00:04:59.863 ************************************ 00:04:59.863 19:43:28 
env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:59.863 EAL: lib.eal log level changed from notice to debug 00:04:59.863 EAL: Detected lcore 0 as core 0 on socket 0 00:04:59.863 EAL: Detected lcore 1 as core 0 on socket 0 00:04:59.863 EAL: Detected lcore 2 as core 0 on socket 0 00:04:59.863 EAL: Detected lcore 3 as core 0 on socket 0 00:04:59.863 EAL: Detected lcore 4 as core 0 on socket 0 00:04:59.863 EAL: Detected lcore 5 as core 0 on socket 0 00:04:59.863 EAL: Detected lcore 6 as core 0 on socket 0 00:04:59.863 EAL: Detected lcore 7 as core 0 on socket 0 00:04:59.863 EAL: Detected lcore 8 as core 0 on socket 0 00:04:59.863 EAL: Detected lcore 9 as core 0 on socket 0 00:04:59.863 EAL: Maximum logical cores by configuration: 128 00:04:59.863 EAL: Detected CPU lcores: 10 00:04:59.863 EAL: Detected NUMA nodes: 1 00:04:59.863 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:59.863 EAL: Detected shared linkage of DPDK 00:04:59.863 EAL: No shared files mode enabled, IPC will be disabled 00:04:59.863 EAL: Selected IOVA mode 'PA' 00:04:59.863 EAL: Probing VFIO support... 00:04:59.863 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:59.863 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:59.863 EAL: Ask a virtual area of 0x2e000 bytes 00:04:59.863 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:59.863 EAL: Setting up physically contiguous memory... 00:04:59.863 EAL: Setting maximum number of open files to 524288 00:04:59.863 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:59.863 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:59.863 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.863 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:59.863 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.863 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.863 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:59.863 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:59.863 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.863 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:59.863 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.863 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.863 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:59.863 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:59.863 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.863 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:59.863 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.863 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.863 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:59.863 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:59.863 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.863 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:59.863 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.863 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.863 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:59.863 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:59.863 EAL: Hugepages will be freed exactly as allocated. 
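The EAL banner above shows the vtophys test coming up in PA IOVA mode with four hugepage-backed memseg lists reserved; the heap it expands and shrinks in the malloc test that follows is carved out of the 2048 kB hugepage pool setup.sh reported earlier (node0: 2048 / 2048). A quick, test-independent way to inspect that pool uses the stock kernel interfaces (the sysfs/procfs paths below are standard locations, not something the test itself reads):

    # hugepage pool backing the EAL heap exercised by the vtophys malloc test
    grep -i hugepages /proc/meminfo
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages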
00:04:59.863 EAL: No shared files mode enabled, IPC is disabled 00:04:59.863 EAL: No shared files mode enabled, IPC is disabled 00:05:00.122 EAL: TSC frequency is ~2200000 KHz 00:05:00.122 EAL: Main lcore 0 is ready (tid=7f5068858a00;cpuset=[0]) 00:05:00.122 EAL: Trying to obtain current memory policy. 00:05:00.122 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.122 EAL: Restoring previous memory policy: 0 00:05:00.122 EAL: request: mp_malloc_sync 00:05:00.122 EAL: No shared files mode enabled, IPC is disabled 00:05:00.122 EAL: Heap on socket 0 was expanded by 2MB 00:05:00.122 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:00.122 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:00.122 EAL: Mem event callback 'spdk:(nil)' registered 00:05:00.122 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:00.122 00:05:00.122 00:05:00.122 CUnit - A unit testing framework for C - Version 2.1-3 00:05:00.122 http://cunit.sourceforge.net/ 00:05:00.122 00:05:00.122 00:05:00.122 Suite: components_suite 00:05:00.122 Test: vtophys_malloc_test ...passed 00:05:00.122 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:00.122 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.122 EAL: Restoring previous memory policy: 4 00:05:00.122 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.122 EAL: request: mp_malloc_sync 00:05:00.122 EAL: No shared files mode enabled, IPC is disabled 00:05:00.122 EAL: Heap on socket 0 was expanded by 4MB 00:05:00.122 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.122 EAL: request: mp_malloc_sync 00:05:00.122 EAL: No shared files mode enabled, IPC is disabled 00:05:00.122 EAL: Heap on socket 0 was shrunk by 4MB 00:05:00.122 EAL: Trying to obtain current memory policy. 00:05:00.122 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.122 EAL: Restoring previous memory policy: 4 00:05:00.122 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.122 EAL: request: mp_malloc_sync 00:05:00.122 EAL: No shared files mode enabled, IPC is disabled 00:05:00.122 EAL: Heap on socket 0 was expanded by 6MB 00:05:00.122 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.122 EAL: request: mp_malloc_sync 00:05:00.122 EAL: No shared files mode enabled, IPC is disabled 00:05:00.122 EAL: Heap on socket 0 was shrunk by 6MB 00:05:00.122 EAL: Trying to obtain current memory policy. 00:05:00.122 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.122 EAL: Restoring previous memory policy: 4 00:05:00.122 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.122 EAL: request: mp_malloc_sync 00:05:00.122 EAL: No shared files mode enabled, IPC is disabled 00:05:00.122 EAL: Heap on socket 0 was expanded by 10MB 00:05:00.122 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.122 EAL: request: mp_malloc_sync 00:05:00.122 EAL: No shared files mode enabled, IPC is disabled 00:05:00.122 EAL: Heap on socket 0 was shrunk by 10MB 00:05:00.122 EAL: Trying to obtain current memory policy. 
00:05:00.122 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.122 EAL: Restoring previous memory policy: 4 00:05:00.122 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.122 EAL: request: mp_malloc_sync 00:05:00.122 EAL: No shared files mode enabled, IPC is disabled 00:05:00.122 EAL: Heap on socket 0 was expanded by 18MB 00:05:00.122 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.122 EAL: request: mp_malloc_sync 00:05:00.122 EAL: No shared files mode enabled, IPC is disabled 00:05:00.122 EAL: Heap on socket 0 was shrunk by 18MB 00:05:00.122 EAL: Trying to obtain current memory policy. 00:05:00.122 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.122 EAL: Restoring previous memory policy: 4 00:05:00.122 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.123 EAL: request: mp_malloc_sync 00:05:00.123 EAL: No shared files mode enabled, IPC is disabled 00:05:00.123 EAL: Heap on socket 0 was expanded by 34MB 00:05:00.123 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.123 EAL: request: mp_malloc_sync 00:05:00.123 EAL: No shared files mode enabled, IPC is disabled 00:05:00.123 EAL: Heap on socket 0 was shrunk by 34MB 00:05:00.123 EAL: Trying to obtain current memory policy. 00:05:00.123 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.123 EAL: Restoring previous memory policy: 4 00:05:00.123 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.123 EAL: request: mp_malloc_sync 00:05:00.123 EAL: No shared files mode enabled, IPC is disabled 00:05:00.123 EAL: Heap on socket 0 was expanded by 66MB 00:05:00.123 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.123 EAL: request: mp_malloc_sync 00:05:00.123 EAL: No shared files mode enabled, IPC is disabled 00:05:00.123 EAL: Heap on socket 0 was shrunk by 66MB 00:05:00.123 EAL: Trying to obtain current memory policy. 00:05:00.123 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.123 EAL: Restoring previous memory policy: 4 00:05:00.123 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.123 EAL: request: mp_malloc_sync 00:05:00.123 EAL: No shared files mode enabled, IPC is disabled 00:05:00.123 EAL: Heap on socket 0 was expanded by 130MB 00:05:00.123 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.123 EAL: request: mp_malloc_sync 00:05:00.123 EAL: No shared files mode enabled, IPC is disabled 00:05:00.123 EAL: Heap on socket 0 was shrunk by 130MB 00:05:00.123 EAL: Trying to obtain current memory policy. 00:05:00.123 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.381 EAL: Restoring previous memory policy: 4 00:05:00.381 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.381 EAL: request: mp_malloc_sync 00:05:00.381 EAL: No shared files mode enabled, IPC is disabled 00:05:00.381 EAL: Heap on socket 0 was expanded by 258MB 00:05:00.381 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.381 EAL: request: mp_malloc_sync 00:05:00.381 EAL: No shared files mode enabled, IPC is disabled 00:05:00.381 EAL: Heap on socket 0 was shrunk by 258MB 00:05:00.381 EAL: Trying to obtain current memory policy. 
00:05:00.381 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.639 EAL: Restoring previous memory policy: 4 00:05:00.639 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.639 EAL: request: mp_malloc_sync 00:05:00.639 EAL: No shared files mode enabled, IPC is disabled 00:05:00.639 EAL: Heap on socket 0 was expanded by 514MB 00:05:00.639 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.639 EAL: request: mp_malloc_sync 00:05:00.639 EAL: No shared files mode enabled, IPC is disabled 00:05:00.639 EAL: Heap on socket 0 was shrunk by 514MB 00:05:00.639 EAL: Trying to obtain current memory policy. 00:05:00.639 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.896 EAL: Restoring previous memory policy: 4 00:05:00.896 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.896 EAL: request: mp_malloc_sync 00:05:00.896 EAL: No shared files mode enabled, IPC is disabled 00:05:00.896 EAL: Heap on socket 0 was expanded by 1026MB 00:05:01.153 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.412 passed 00:05:01.412 00:05:01.412 Run Summary: Type Total Ran Passed Failed Inactive 00:05:01.412 suites 1 1 n/a 0 0 00:05:01.412 tests 2 2 2 0 0 00:05:01.412 asserts 5302 5302 5302 0 n/a 00:05:01.412 00:05:01.412 Elapsed time = 1.256 seconds 00:05:01.412 EAL: request: mp_malloc_sync 00:05:01.412 EAL: No shared files mode enabled, IPC is disabled 00:05:01.412 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:01.412 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.412 EAL: request: mp_malloc_sync 00:05:01.412 EAL: No shared files mode enabled, IPC is disabled 00:05:01.412 EAL: Heap on socket 0 was shrunk by 2MB 00:05:01.412 EAL: No shared files mode enabled, IPC is disabled 00:05:01.412 EAL: No shared files mode enabled, IPC is disabled 00:05:01.412 EAL: No shared files mode enabled, IPC is disabled 00:05:01.412 00:05:01.412 real 0m1.458s 00:05:01.412 user 0m0.786s 00:05:01.412 sys 0m0.533s 00:05:01.413 19:43:29 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.413 19:43:29 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:01.413 ************************************ 00:05:01.413 END TEST env_vtophys 00:05:01.413 ************************************ 00:05:01.413 19:43:29 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:01.413 19:43:29 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.413 19:43:29 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.413 19:43:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:01.413 ************************************ 00:05:01.413 START TEST env_pci 00:05:01.413 ************************************ 00:05:01.413 19:43:29 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:01.413 00:05:01.413 00:05:01.413 CUnit - A unit testing framework for C - Version 2.1-3 00:05:01.413 http://cunit.sourceforge.net/ 00:05:01.413 00:05:01.413 00:05:01.413 Suite: pci 00:05:01.413 Test: pci_hook ...[2024-07-24 19:43:29.947051] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58616 has claimed it 00:05:01.413 passed 00:05:01.413 00:05:01.413 EAL: Cannot find device (10000:00:01.0) 00:05:01.413 EAL: Failed to attach device on primary process 00:05:01.413 Run Summary: Type Total Ran Passed Failed Inactive 00:05:01.413 suites 1 1 n/a 0 0 00:05:01.413 tests 1 1 1 0 0 
00:05:01.413 asserts 25 25 25 0 n/a 00:05:01.413 00:05:01.413 Elapsed time = 0.002 seconds 00:05:01.413 00:05:01.413 real 0m0.022s 00:05:01.413 user 0m0.012s 00:05:01.413 sys 0m0.009s 00:05:01.413 19:43:29 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.413 19:43:29 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:01.413 ************************************ 00:05:01.413 END TEST env_pci 00:05:01.413 ************************************ 00:05:01.413 19:43:29 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:01.413 19:43:29 env -- env/env.sh@15 -- # uname 00:05:01.413 19:43:29 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:01.413 19:43:29 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:01.413 19:43:29 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:01.413 19:43:29 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:01.413 19:43:29 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.413 19:43:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:01.413 ************************************ 00:05:01.413 START TEST env_dpdk_post_init 00:05:01.413 ************************************ 00:05:01.413 19:43:30 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:01.413 EAL: Detected CPU lcores: 10 00:05:01.413 EAL: Detected NUMA nodes: 1 00:05:01.413 EAL: Detected shared linkage of DPDK 00:05:01.413 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:01.413 EAL: Selected IOVA mode 'PA' 00:05:01.673 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:01.673 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:01.673 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:01.673 Starting DPDK initialization... 00:05:01.673 Starting SPDK post initialization... 00:05:01.673 SPDK NVMe probe 00:05:01.673 Attaching to 0000:00:10.0 00:05:01.673 Attaching to 0000:00:11.0 00:05:01.673 Attached to 0000:00:10.0 00:05:01.673 Attached to 0000:00:11.0 00:05:01.673 Cleaning up... 
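env_dpdk_post_init attaches to both emulated NVMe controllers (0000:00:10.0 and 0000:00:11.0) through the spdk_nvme driver, which only succeeds because setup.sh rebound them from the kernel nvme driver to uio_pci_generic a few steps earlier. A small sketch for checking which kernel driver currently owns a BDF, using the same sysfs layout the harness walks (purely illustrative, not part of the test):

    # which driver currently owns the two controllers probed above
    for bdf in 0000:00:10.0 0000:00:11.0; do
        if [[ -e "/sys/bus/pci/devices/$bdf/driver" ]]; then
            echo "$bdf -> $(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")"
        else
            echo "$bdf -> (no driver bound)"
        fi
    done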
00:05:01.673 00:05:01.673 real 0m0.187s 00:05:01.673 user 0m0.042s 00:05:01.673 sys 0m0.044s 00:05:01.673 19:43:30 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.673 19:43:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:01.673 ************************************ 00:05:01.673 END TEST env_dpdk_post_init 00:05:01.673 ************************************ 00:05:01.673 19:43:30 env -- env/env.sh@26 -- # uname 00:05:01.673 19:43:30 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:01.673 19:43:30 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:01.673 19:43:30 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.673 19:43:30 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.673 19:43:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:01.673 ************************************ 00:05:01.673 START TEST env_mem_callbacks 00:05:01.673 ************************************ 00:05:01.673 19:43:30 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:01.673 EAL: Detected CPU lcores: 10 00:05:01.673 EAL: Detected NUMA nodes: 1 00:05:01.673 EAL: Detected shared linkage of DPDK 00:05:01.673 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:01.673 EAL: Selected IOVA mode 'PA' 00:05:01.931 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:01.931 00:05:01.931 00:05:01.931 CUnit - A unit testing framework for C - Version 2.1-3 00:05:01.931 http://cunit.sourceforge.net/ 00:05:01.931 00:05:01.931 00:05:01.931 Suite: memory 00:05:01.931 Test: test ... 00:05:01.931 register 0x200000200000 2097152 00:05:01.931 malloc 3145728 00:05:01.931 register 0x200000400000 4194304 00:05:01.931 buf 0x200000500000 len 3145728 PASSED 00:05:01.931 malloc 64 00:05:01.931 buf 0x2000004fff40 len 64 PASSED 00:05:01.931 malloc 4194304 00:05:01.931 register 0x200000800000 6291456 00:05:01.931 buf 0x200000a00000 len 4194304 PASSED 00:05:01.931 free 0x200000500000 3145728 00:05:01.931 free 0x2000004fff40 64 00:05:01.931 unregister 0x200000400000 4194304 PASSED 00:05:01.931 free 0x200000a00000 4194304 00:05:01.931 unregister 0x200000800000 6291456 PASSED 00:05:01.931 malloc 8388608 00:05:01.931 register 0x200000400000 10485760 00:05:01.931 buf 0x200000600000 len 8388608 PASSED 00:05:01.931 free 0x200000600000 8388608 00:05:01.931 unregister 0x200000400000 10485760 PASSED 00:05:01.931 passed 00:05:01.931 00:05:01.931 Run Summary: Type Total Ran Passed Failed Inactive 00:05:01.931 suites 1 1 n/a 0 0 00:05:01.931 tests 1 1 1 0 0 00:05:01.931 asserts 15 15 15 0 n/a 00:05:01.931 00:05:01.931 Elapsed time = 0.007 seconds 00:05:01.931 00:05:01.931 real 0m0.140s 00:05:01.931 user 0m0.015s 00:05:01.931 sys 0m0.024s 00:05:01.931 19:43:30 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.931 19:43:30 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:01.931 ************************************ 00:05:01.931 END TEST env_mem_callbacks 00:05:01.931 ************************************ 00:05:01.931 00:05:01.931 real 0m2.354s 00:05:01.931 user 0m1.179s 00:05:01.931 sys 0m0.819s 00:05:01.931 19:43:30 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.931 19:43:30 env -- common/autotest_common.sh@10 -- # set +x 00:05:01.931 ************************************ 00:05:01.931 END TEST env 00:05:01.931 
************************************ 00:05:01.931 19:43:30 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:01.931 19:43:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.931 19:43:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.931 19:43:30 -- common/autotest_common.sh@10 -- # set +x 00:05:01.931 ************************************ 00:05:01.931 START TEST rpc 00:05:01.931 ************************************ 00:05:01.931 19:43:30 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:01.931 * Looking for test storage... 00:05:01.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:01.931 19:43:30 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58720 00:05:01.931 19:43:30 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:01.931 19:43:30 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.931 19:43:30 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58720 00:05:01.931 19:43:30 rpc -- common/autotest_common.sh@831 -- # '[' -z 58720 ']' 00:05:01.931 19:43:30 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.931 19:43:30 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:01.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.931 19:43:30 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.931 19:43:30 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:01.931 19:43:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.189 [2024-07-24 19:43:30.620619] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:05:02.189 [2024-07-24 19:43:30.620770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58720 ] 00:05:02.189 [2024-07-24 19:43:30.764029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.445 [2024-07-24 19:43:30.882250] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:02.445 [2024-07-24 19:43:30.882329] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58720' to capture a snapshot of events at runtime. 00:05:02.445 [2024-07-24 19:43:30.882349] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:02.445 [2024-07-24 19:43:30.882374] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:02.445 [2024-07-24 19:43:30.882385] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58720 for offline analysis/debug. 
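The rpc tests that follow drive this spdk_tgt instance through rpc_cmd, the helper from common/autotest_common.sh that forwards calls to scripts/rpc.py over the /var/tmp/spdk.sock socket the target is listening on. A hand-run sketch of the first few calls made below, assuming the default socket path:

    # the same RPCs rpc_integrity issues via rpc_cmd (default /var/tmp/spdk.sock assumed)
    scripts/rpc.py bdev_malloc_create 8 512                  # 8 MiB malloc bdev, 512-byte blocks -> Malloc0
    scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    scripts/rpc.py bdev_get_bdevs | jq length                # 2 while the passthru claim is in place
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete Malloc0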
00:05:02.445 [2024-07-24 19:43:30.882435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.445 [2024-07-24 19:43:30.936266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:03.010 19:43:31 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:03.010 19:43:31 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:03.010 19:43:31 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:03.010 19:43:31 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:03.010 19:43:31 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:03.010 19:43:31 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:03.010 19:43:31 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.010 19:43:31 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.010 19:43:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.010 ************************************ 00:05:03.010 START TEST rpc_integrity 00:05:03.010 ************************************ 00:05:03.010 19:43:31 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:03.010 19:43:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:03.010 19:43:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.010 19:43:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.010 19:43:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.010 19:43:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:03.010 19:43:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:03.268 19:43:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:03.268 19:43:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:03.268 19:43:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.268 19:43:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.268 19:43:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.268 19:43:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:03.268 19:43:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:03.268 19:43:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.268 19:43:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.268 19:43:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.268 19:43:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:03.268 { 00:05:03.268 "name": "Malloc0", 00:05:03.268 "aliases": [ 00:05:03.268 "e85218b5-6faa-4f10-9380-e124c0f5270b" 00:05:03.268 ], 00:05:03.268 "product_name": "Malloc disk", 00:05:03.268 "block_size": 512, 00:05:03.268 "num_blocks": 16384, 00:05:03.268 "uuid": "e85218b5-6faa-4f10-9380-e124c0f5270b", 00:05:03.268 "assigned_rate_limits": { 00:05:03.268 "rw_ios_per_sec": 0, 00:05:03.268 "rw_mbytes_per_sec": 0, 00:05:03.268 "r_mbytes_per_sec": 0, 00:05:03.268 "w_mbytes_per_sec": 0 00:05:03.268 }, 00:05:03.268 "claimed": false, 00:05:03.268 "zoned": false, 00:05:03.268 
"supported_io_types": { 00:05:03.268 "read": true, 00:05:03.268 "write": true, 00:05:03.268 "unmap": true, 00:05:03.268 "flush": true, 00:05:03.268 "reset": true, 00:05:03.268 "nvme_admin": false, 00:05:03.268 "nvme_io": false, 00:05:03.268 "nvme_io_md": false, 00:05:03.268 "write_zeroes": true, 00:05:03.268 "zcopy": true, 00:05:03.268 "get_zone_info": false, 00:05:03.268 "zone_management": false, 00:05:03.268 "zone_append": false, 00:05:03.268 "compare": false, 00:05:03.268 "compare_and_write": false, 00:05:03.268 "abort": true, 00:05:03.268 "seek_hole": false, 00:05:03.268 "seek_data": false, 00:05:03.268 "copy": true, 00:05:03.268 "nvme_iov_md": false 00:05:03.268 }, 00:05:03.268 "memory_domains": [ 00:05:03.268 { 00:05:03.268 "dma_device_id": "system", 00:05:03.269 "dma_device_type": 1 00:05:03.269 }, 00:05:03.269 { 00:05:03.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.269 "dma_device_type": 2 00:05:03.269 } 00:05:03.269 ], 00:05:03.269 "driver_specific": {} 00:05:03.269 } 00:05:03.269 ]' 00:05:03.269 19:43:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:03.269 19:43:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:03.269 19:43:31 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:03.269 19:43:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.269 19:43:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.269 [2024-07-24 19:43:31.823229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:03.269 [2024-07-24 19:43:31.823293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:03.269 [2024-07-24 19:43:31.823321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x242dda0 00:05:03.269 [2024-07-24 19:43:31.823331] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:03.269 [2024-07-24 19:43:31.825042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:03.269 [2024-07-24 19:43:31.825080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:03.269 Passthru0 00:05:03.269 19:43:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.269 19:43:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:03.269 19:43:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.269 19:43:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.269 19:43:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.269 19:43:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:03.269 { 00:05:03.269 "name": "Malloc0", 00:05:03.269 "aliases": [ 00:05:03.269 "e85218b5-6faa-4f10-9380-e124c0f5270b" 00:05:03.269 ], 00:05:03.269 "product_name": "Malloc disk", 00:05:03.269 "block_size": 512, 00:05:03.269 "num_blocks": 16384, 00:05:03.269 "uuid": "e85218b5-6faa-4f10-9380-e124c0f5270b", 00:05:03.269 "assigned_rate_limits": { 00:05:03.269 "rw_ios_per_sec": 0, 00:05:03.269 "rw_mbytes_per_sec": 0, 00:05:03.269 "r_mbytes_per_sec": 0, 00:05:03.269 "w_mbytes_per_sec": 0 00:05:03.269 }, 00:05:03.269 "claimed": true, 00:05:03.269 "claim_type": "exclusive_write", 00:05:03.269 "zoned": false, 00:05:03.269 "supported_io_types": { 00:05:03.269 "read": true, 00:05:03.269 "write": true, 00:05:03.269 "unmap": true, 00:05:03.269 "flush": true, 00:05:03.269 "reset": true, 00:05:03.269 "nvme_admin": false, 
00:05:03.269 "nvme_io": false, 00:05:03.269 "nvme_io_md": false, 00:05:03.269 "write_zeroes": true, 00:05:03.269 "zcopy": true, 00:05:03.269 "get_zone_info": false, 00:05:03.269 "zone_management": false, 00:05:03.269 "zone_append": false, 00:05:03.269 "compare": false, 00:05:03.269 "compare_and_write": false, 00:05:03.269 "abort": true, 00:05:03.269 "seek_hole": false, 00:05:03.269 "seek_data": false, 00:05:03.269 "copy": true, 00:05:03.269 "nvme_iov_md": false 00:05:03.269 }, 00:05:03.269 "memory_domains": [ 00:05:03.269 { 00:05:03.269 "dma_device_id": "system", 00:05:03.269 "dma_device_type": 1 00:05:03.269 }, 00:05:03.269 { 00:05:03.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.269 "dma_device_type": 2 00:05:03.269 } 00:05:03.269 ], 00:05:03.269 "driver_specific": {} 00:05:03.269 }, 00:05:03.269 { 00:05:03.269 "name": "Passthru0", 00:05:03.269 "aliases": [ 00:05:03.269 "a11b0141-d8f9-541e-86bd-e6d41ea5a3d8" 00:05:03.269 ], 00:05:03.269 "product_name": "passthru", 00:05:03.269 "block_size": 512, 00:05:03.269 "num_blocks": 16384, 00:05:03.269 "uuid": "a11b0141-d8f9-541e-86bd-e6d41ea5a3d8", 00:05:03.269 "assigned_rate_limits": { 00:05:03.269 "rw_ios_per_sec": 0, 00:05:03.269 "rw_mbytes_per_sec": 0, 00:05:03.269 "r_mbytes_per_sec": 0, 00:05:03.269 "w_mbytes_per_sec": 0 00:05:03.269 }, 00:05:03.269 "claimed": false, 00:05:03.269 "zoned": false, 00:05:03.269 "supported_io_types": { 00:05:03.269 "read": true, 00:05:03.269 "write": true, 00:05:03.269 "unmap": true, 00:05:03.269 "flush": true, 00:05:03.269 "reset": true, 00:05:03.269 "nvme_admin": false, 00:05:03.269 "nvme_io": false, 00:05:03.269 "nvme_io_md": false, 00:05:03.269 "write_zeroes": true, 00:05:03.269 "zcopy": true, 00:05:03.269 "get_zone_info": false, 00:05:03.269 "zone_management": false, 00:05:03.269 "zone_append": false, 00:05:03.269 "compare": false, 00:05:03.269 "compare_and_write": false, 00:05:03.269 "abort": true, 00:05:03.269 "seek_hole": false, 00:05:03.269 "seek_data": false, 00:05:03.269 "copy": true, 00:05:03.269 "nvme_iov_md": false 00:05:03.269 }, 00:05:03.269 "memory_domains": [ 00:05:03.269 { 00:05:03.269 "dma_device_id": "system", 00:05:03.269 "dma_device_type": 1 00:05:03.269 }, 00:05:03.269 { 00:05:03.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.269 "dma_device_type": 2 00:05:03.269 } 00:05:03.269 ], 00:05:03.269 "driver_specific": { 00:05:03.269 "passthru": { 00:05:03.269 "name": "Passthru0", 00:05:03.269 "base_bdev_name": "Malloc0" 00:05:03.269 } 00:05:03.269 } 00:05:03.269 } 00:05:03.269 ]' 00:05:03.269 19:43:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:03.269 19:43:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:03.269 19:43:31 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:03.269 19:43:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.269 19:43:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.269 19:43:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.269 19:43:31 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:03.269 19:43:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.269 19:43:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.269 19:43:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.269 19:43:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:03.269 19:43:31 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.269 19:43:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.528 19:43:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.528 19:43:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:03.528 19:43:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:03.528 19:43:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:03.528 00:05:03.528 real 0m0.339s 00:05:03.528 user 0m0.220s 00:05:03.528 sys 0m0.041s 00:05:03.528 19:43:31 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.528 19:43:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.528 ************************************ 00:05:03.528 END TEST rpc_integrity 00:05:03.528 ************************************ 00:05:03.528 19:43:32 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:03.528 19:43:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.528 19:43:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.528 19:43:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.528 ************************************ 00:05:03.528 START TEST rpc_plugins 00:05:03.528 ************************************ 00:05:03.528 19:43:32 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:03.528 19:43:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:03.528 19:43:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.528 19:43:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.528 19:43:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.528 19:43:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:03.528 19:43:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:03.528 19:43:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.528 19:43:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.528 19:43:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.528 19:43:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:03.528 { 00:05:03.528 "name": "Malloc1", 00:05:03.528 "aliases": [ 00:05:03.528 "61729aca-ed44-4d04-88ff-ab33ca00d4da" 00:05:03.528 ], 00:05:03.528 "product_name": "Malloc disk", 00:05:03.528 "block_size": 4096, 00:05:03.528 "num_blocks": 256, 00:05:03.528 "uuid": "61729aca-ed44-4d04-88ff-ab33ca00d4da", 00:05:03.528 "assigned_rate_limits": { 00:05:03.528 "rw_ios_per_sec": 0, 00:05:03.528 "rw_mbytes_per_sec": 0, 00:05:03.528 "r_mbytes_per_sec": 0, 00:05:03.528 "w_mbytes_per_sec": 0 00:05:03.528 }, 00:05:03.528 "claimed": false, 00:05:03.528 "zoned": false, 00:05:03.528 "supported_io_types": { 00:05:03.528 "read": true, 00:05:03.528 "write": true, 00:05:03.528 "unmap": true, 00:05:03.528 "flush": true, 00:05:03.528 "reset": true, 00:05:03.528 "nvme_admin": false, 00:05:03.528 "nvme_io": false, 00:05:03.528 "nvme_io_md": false, 00:05:03.528 "write_zeroes": true, 00:05:03.528 "zcopy": true, 00:05:03.528 "get_zone_info": false, 00:05:03.528 "zone_management": false, 00:05:03.528 "zone_append": false, 00:05:03.528 "compare": false, 00:05:03.528 "compare_and_write": false, 00:05:03.528 "abort": true, 00:05:03.528 "seek_hole": false, 00:05:03.528 "seek_data": false, 00:05:03.528 "copy": true, 00:05:03.528 "nvme_iov_md": false 00:05:03.528 }, 00:05:03.528 "memory_domains": [ 00:05:03.528 { 
00:05:03.528 "dma_device_id": "system", 00:05:03.528 "dma_device_type": 1 00:05:03.528 }, 00:05:03.528 { 00:05:03.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.528 "dma_device_type": 2 00:05:03.528 } 00:05:03.528 ], 00:05:03.528 "driver_specific": {} 00:05:03.528 } 00:05:03.528 ]' 00:05:03.528 19:43:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:03.528 19:43:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:03.528 19:43:32 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:03.528 19:43:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.528 19:43:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.528 19:43:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.528 19:43:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:03.528 19:43:32 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.528 19:43:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.528 19:43:32 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.528 19:43:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:03.528 19:43:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:03.787 19:43:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:03.787 00:05:03.787 real 0m0.151s 00:05:03.787 user 0m0.092s 00:05:03.787 sys 0m0.020s 00:05:03.787 19:43:32 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.787 19:43:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:03.787 ************************************ 00:05:03.787 END TEST rpc_plugins 00:05:03.787 ************************************ 00:05:03.787 19:43:32 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:03.787 19:43:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.787 19:43:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.787 19:43:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.787 ************************************ 00:05:03.787 START TEST rpc_trace_cmd_test 00:05:03.787 ************************************ 00:05:03.787 19:43:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:03.787 19:43:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:03.787 19:43:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:03.787 19:43:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.787 19:43:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:03.787 19:43:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.787 19:43:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:03.787 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58720", 00:05:03.787 "tpoint_group_mask": "0x8", 00:05:03.787 "iscsi_conn": { 00:05:03.787 "mask": "0x2", 00:05:03.787 "tpoint_mask": "0x0" 00:05:03.787 }, 00:05:03.787 "scsi": { 00:05:03.787 "mask": "0x4", 00:05:03.787 "tpoint_mask": "0x0" 00:05:03.787 }, 00:05:03.787 "bdev": { 00:05:03.787 "mask": "0x8", 00:05:03.787 "tpoint_mask": "0xffffffffffffffff" 00:05:03.787 }, 00:05:03.787 "nvmf_rdma": { 00:05:03.787 "mask": "0x10", 00:05:03.787 "tpoint_mask": "0x0" 00:05:03.787 }, 00:05:03.787 "nvmf_tcp": { 00:05:03.787 "mask": "0x20", 00:05:03.787 "tpoint_mask": "0x0" 00:05:03.787 }, 00:05:03.787 "ftl": { 00:05:03.787 
"mask": "0x40", 00:05:03.787 "tpoint_mask": "0x0" 00:05:03.787 }, 00:05:03.787 "blobfs": { 00:05:03.787 "mask": "0x80", 00:05:03.787 "tpoint_mask": "0x0" 00:05:03.787 }, 00:05:03.787 "dsa": { 00:05:03.787 "mask": "0x200", 00:05:03.787 "tpoint_mask": "0x0" 00:05:03.787 }, 00:05:03.787 "thread": { 00:05:03.787 "mask": "0x400", 00:05:03.787 "tpoint_mask": "0x0" 00:05:03.787 }, 00:05:03.787 "nvme_pcie": { 00:05:03.787 "mask": "0x800", 00:05:03.787 "tpoint_mask": "0x0" 00:05:03.787 }, 00:05:03.787 "iaa": { 00:05:03.787 "mask": "0x1000", 00:05:03.787 "tpoint_mask": "0x0" 00:05:03.787 }, 00:05:03.787 "nvme_tcp": { 00:05:03.787 "mask": "0x2000", 00:05:03.787 "tpoint_mask": "0x0" 00:05:03.787 }, 00:05:03.787 "bdev_nvme": { 00:05:03.787 "mask": "0x4000", 00:05:03.787 "tpoint_mask": "0x0" 00:05:03.787 }, 00:05:03.787 "sock": { 00:05:03.787 "mask": "0x8000", 00:05:03.787 "tpoint_mask": "0x0" 00:05:03.787 } 00:05:03.787 }' 00:05:03.787 19:43:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:03.787 19:43:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:03.787 19:43:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:03.787 19:43:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:03.787 19:43:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:03.787 19:43:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:03.787 19:43:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:03.787 19:43:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:03.787 19:43:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:04.046 19:43:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:04.046 00:05:04.046 real 0m0.221s 00:05:04.046 user 0m0.186s 00:05:04.046 sys 0m0.026s 00:05:04.046 19:43:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.046 19:43:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:04.046 ************************************ 00:05:04.046 END TEST rpc_trace_cmd_test 00:05:04.046 ************************************ 00:05:04.046 19:43:32 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:04.046 19:43:32 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:04.046 19:43:32 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:04.046 19:43:32 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.046 19:43:32 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.046 19:43:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.046 ************************************ 00:05:04.046 START TEST rpc_daemon_integrity 00:05:04.046 ************************************ 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:04.046 { 00:05:04.046 "name": "Malloc2", 00:05:04.046 "aliases": [ 00:05:04.046 "7e74ccd5-7cec-4aa6-a5b8-4657de0c8b51" 00:05:04.046 ], 00:05:04.046 "product_name": "Malloc disk", 00:05:04.046 "block_size": 512, 00:05:04.046 "num_blocks": 16384, 00:05:04.046 "uuid": "7e74ccd5-7cec-4aa6-a5b8-4657de0c8b51", 00:05:04.046 "assigned_rate_limits": { 00:05:04.046 "rw_ios_per_sec": 0, 00:05:04.046 "rw_mbytes_per_sec": 0, 00:05:04.046 "r_mbytes_per_sec": 0, 00:05:04.046 "w_mbytes_per_sec": 0 00:05:04.046 }, 00:05:04.046 "claimed": false, 00:05:04.046 "zoned": false, 00:05:04.046 "supported_io_types": { 00:05:04.046 "read": true, 00:05:04.046 "write": true, 00:05:04.046 "unmap": true, 00:05:04.046 "flush": true, 00:05:04.046 "reset": true, 00:05:04.046 "nvme_admin": false, 00:05:04.046 "nvme_io": false, 00:05:04.046 "nvme_io_md": false, 00:05:04.046 "write_zeroes": true, 00:05:04.046 "zcopy": true, 00:05:04.046 "get_zone_info": false, 00:05:04.046 "zone_management": false, 00:05:04.046 "zone_append": false, 00:05:04.046 "compare": false, 00:05:04.046 "compare_and_write": false, 00:05:04.046 "abort": true, 00:05:04.046 "seek_hole": false, 00:05:04.046 "seek_data": false, 00:05:04.046 "copy": true, 00:05:04.046 "nvme_iov_md": false 00:05:04.046 }, 00:05:04.046 "memory_domains": [ 00:05:04.046 { 00:05:04.046 "dma_device_id": "system", 00:05:04.046 "dma_device_type": 1 00:05:04.046 }, 00:05:04.046 { 00:05:04.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.046 "dma_device_type": 2 00:05:04.046 } 00:05:04.046 ], 00:05:04.046 "driver_specific": {} 00:05:04.046 } 00:05:04.046 ]' 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.046 [2024-07-24 19:43:32.663769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:04.046 [2024-07-24 19:43:32.663829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:04.046 [2024-07-24 19:43:32.663852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2492be0 00:05:04.046 [2024-07-24 19:43:32.663861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:04.046 [2024-07-24 19:43:32.665518] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:04.046 [2024-07-24 19:43:32.665558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:04.046 Passthru0 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.046 19:43:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:04.046 { 00:05:04.046 "name": "Malloc2", 00:05:04.046 "aliases": [ 00:05:04.046 "7e74ccd5-7cec-4aa6-a5b8-4657de0c8b51" 00:05:04.046 ], 00:05:04.046 "product_name": "Malloc disk", 00:05:04.046 "block_size": 512, 00:05:04.046 "num_blocks": 16384, 00:05:04.046 "uuid": "7e74ccd5-7cec-4aa6-a5b8-4657de0c8b51", 00:05:04.046 "assigned_rate_limits": { 00:05:04.046 "rw_ios_per_sec": 0, 00:05:04.046 "rw_mbytes_per_sec": 0, 00:05:04.046 "r_mbytes_per_sec": 0, 00:05:04.046 "w_mbytes_per_sec": 0 00:05:04.046 }, 00:05:04.046 "claimed": true, 00:05:04.046 "claim_type": "exclusive_write", 00:05:04.046 "zoned": false, 00:05:04.046 "supported_io_types": { 00:05:04.046 "read": true, 00:05:04.046 "write": true, 00:05:04.046 "unmap": true, 00:05:04.046 "flush": true, 00:05:04.046 "reset": true, 00:05:04.046 "nvme_admin": false, 00:05:04.046 "nvme_io": false, 00:05:04.046 "nvme_io_md": false, 00:05:04.046 "write_zeroes": true, 00:05:04.046 "zcopy": true, 00:05:04.046 "get_zone_info": false, 00:05:04.046 "zone_management": false, 00:05:04.046 "zone_append": false, 00:05:04.046 "compare": false, 00:05:04.046 "compare_and_write": false, 00:05:04.046 "abort": true, 00:05:04.046 "seek_hole": false, 00:05:04.046 "seek_data": false, 00:05:04.046 "copy": true, 00:05:04.046 "nvme_iov_md": false 00:05:04.046 }, 00:05:04.046 "memory_domains": [ 00:05:04.046 { 00:05:04.046 "dma_device_id": "system", 00:05:04.046 "dma_device_type": 1 00:05:04.046 }, 00:05:04.046 { 00:05:04.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.046 "dma_device_type": 2 00:05:04.046 } 00:05:04.046 ], 00:05:04.046 "driver_specific": {} 00:05:04.046 }, 00:05:04.046 { 00:05:04.046 "name": "Passthru0", 00:05:04.046 "aliases": [ 00:05:04.046 "29fa2a2d-793e-5e0e-be7e-6c71c199b245" 00:05:04.046 ], 00:05:04.046 "product_name": "passthru", 00:05:04.046 "block_size": 512, 00:05:04.046 "num_blocks": 16384, 00:05:04.046 "uuid": "29fa2a2d-793e-5e0e-be7e-6c71c199b245", 00:05:04.046 "assigned_rate_limits": { 00:05:04.046 "rw_ios_per_sec": 0, 00:05:04.046 "rw_mbytes_per_sec": 0, 00:05:04.046 "r_mbytes_per_sec": 0, 00:05:04.046 "w_mbytes_per_sec": 0 00:05:04.046 }, 00:05:04.046 "claimed": false, 00:05:04.046 "zoned": false, 00:05:04.046 "supported_io_types": { 00:05:04.046 "read": true, 00:05:04.046 "write": true, 00:05:04.046 "unmap": true, 00:05:04.046 "flush": true, 00:05:04.046 "reset": true, 00:05:04.046 "nvme_admin": false, 00:05:04.046 "nvme_io": false, 00:05:04.046 "nvme_io_md": false, 00:05:04.046 "write_zeroes": true, 00:05:04.046 "zcopy": true, 00:05:04.046 "get_zone_info": false, 00:05:04.046 "zone_management": false, 00:05:04.046 "zone_append": false, 00:05:04.046 "compare": false, 00:05:04.046 "compare_and_write": false, 00:05:04.046 "abort": true, 00:05:04.046 "seek_hole": false, 
00:05:04.046 "seek_data": false, 00:05:04.046 "copy": true, 00:05:04.046 "nvme_iov_md": false 00:05:04.046 }, 00:05:04.046 "memory_domains": [ 00:05:04.046 { 00:05:04.046 "dma_device_id": "system", 00:05:04.046 "dma_device_type": 1 00:05:04.046 }, 00:05:04.046 { 00:05:04.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.046 "dma_device_type": 2 00:05:04.046 } 00:05:04.046 ], 00:05:04.046 "driver_specific": { 00:05:04.046 "passthru": { 00:05:04.046 "name": "Passthru0", 00:05:04.046 "base_bdev_name": "Malloc2" 00:05:04.046 } 00:05:04.046 } 00:05:04.046 } 00:05:04.046 ]' 00:05:04.047 19:43:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:04.305 19:43:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:04.305 19:43:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:04.305 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.305 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.305 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.305 19:43:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:04.305 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.305 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.305 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.305 19:43:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:04.305 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.305 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.305 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.305 19:43:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:04.305 19:43:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:04.305 19:43:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:04.305 00:05:04.305 real 0m0.324s 00:05:04.305 user 0m0.204s 00:05:04.305 sys 0m0.047s 00:05:04.305 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.305 19:43:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:04.306 ************************************ 00:05:04.306 END TEST rpc_daemon_integrity 00:05:04.306 ************************************ 00:05:04.306 19:43:32 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:04.306 19:43:32 rpc -- rpc/rpc.sh@84 -- # killprocess 58720 00:05:04.306 19:43:32 rpc -- common/autotest_common.sh@950 -- # '[' -z 58720 ']' 00:05:04.306 19:43:32 rpc -- common/autotest_common.sh@954 -- # kill -0 58720 00:05:04.306 19:43:32 rpc -- common/autotest_common.sh@955 -- # uname 00:05:04.306 19:43:32 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:04.306 19:43:32 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58720 00:05:04.306 19:43:32 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:04.306 19:43:32 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:04.306 killing process with pid 58720 00:05:04.306 19:43:32 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58720' 00:05:04.306 19:43:32 rpc -- common/autotest_common.sh@969 -- # kill 58720 00:05:04.306 19:43:32 
rpc -- common/autotest_common.sh@974 -- # wait 58720 00:05:04.872 00:05:04.872 real 0m2.831s 00:05:04.872 user 0m3.649s 00:05:04.872 sys 0m0.674s 00:05:04.872 19:43:33 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:04.872 ************************************ 00:05:04.872 END TEST rpc 00:05:04.872 ************************************ 00:05:04.872 19:43:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.872 19:43:33 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:04.872 19:43:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.872 19:43:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.872 19:43:33 -- common/autotest_common.sh@10 -- # set +x 00:05:04.872 ************************************ 00:05:04.872 START TEST skip_rpc 00:05:04.872 ************************************ 00:05:04.872 19:43:33 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:04.872 * Looking for test storage... 00:05:04.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:04.872 19:43:33 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:04.872 19:43:33 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:04.872 19:43:33 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:04.872 19:43:33 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:04.872 19:43:33 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:04.872 19:43:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.872 ************************************ 00:05:04.872 START TEST skip_rpc 00:05:04.872 ************************************ 00:05:04.872 19:43:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:04.872 19:43:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58918 00:05:04.873 19:43:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.873 19:43:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:04.873 19:43:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:04.873 [2024-07-24 19:43:33.500548] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:05:04.873 [2024-07-24 19:43:33.500681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58918 ] 00:05:05.131 [2024-07-24 19:43:33.639398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.131 [2024-07-24 19:43:33.758131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.387 [2024-07-24 19:43:33.812009] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58918 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 58918 ']' 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 58918 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58918 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:10.648 killing process with pid 58918 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58918' 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 58918 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 58918 00:05:10.648 00:05:10.648 real 0m5.420s 00:05:10.648 user 0m5.029s 00:05:10.648 sys 0m0.275s 00:05:10.648 19:43:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.648 ************************************ 00:05:10.648 END TEST skip_rpc 
00:05:10.648 19:43:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.648 ************************************ 00:05:10.648 19:43:38 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:10.648 19:43:38 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.648 19:43:38 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.648 19:43:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.648 ************************************ 00:05:10.648 START TEST skip_rpc_with_json 00:05:10.648 ************************************ 00:05:10.648 19:43:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:10.648 19:43:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:10.648 19:43:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58999 00:05:10.648 19:43:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.648 19:43:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.648 19:43:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58999 00:05:10.648 19:43:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 58999 ']' 00:05:10.648 19:43:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.648 19:43:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.648 19:43:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.648 19:43:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.648 19:43:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.649 [2024-07-24 19:43:38.943476] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:05:10.649 [2024-07-24 19:43:38.943576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58999 ] 00:05:10.649 [2024-07-24 19:43:39.075715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.649 [2024-07-24 19:43:39.194492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.649 [2024-07-24 19:43:39.247442] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:11.214 19:43:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.214 19:43:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:11.214 19:43:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:11.214 19:43:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.215 19:43:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.215 [2024-07-24 19:43:39.861019] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:11.215 request: 00:05:11.215 { 00:05:11.215 "trtype": "tcp", 00:05:11.215 "method": "nvmf_get_transports", 00:05:11.215 "req_id": 1 00:05:11.215 } 00:05:11.215 Got JSON-RPC error response 00:05:11.215 response: 00:05:11.215 { 00:05:11.215 "code": -19, 00:05:11.215 "message": "No such device" 00:05:11.215 } 00:05:11.215 19:43:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:11.215 19:43:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:11.215 19:43:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.215 19:43:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.215 [2024-07-24 19:43:39.869131] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:11.215 19:43:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.215 19:43:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:11.215 19:43:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.215 19:43:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.473 19:43:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.473 19:43:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:11.473 { 00:05:11.473 "subsystems": [ 00:05:11.473 { 00:05:11.473 "subsystem": "keyring", 00:05:11.473 "config": [] 00:05:11.473 }, 00:05:11.473 { 00:05:11.473 "subsystem": "iobuf", 00:05:11.473 "config": [ 00:05:11.473 { 00:05:11.473 "method": "iobuf_set_options", 00:05:11.473 "params": { 00:05:11.473 "small_pool_count": 8192, 00:05:11.473 "large_pool_count": 1024, 00:05:11.473 "small_bufsize": 8192, 00:05:11.473 "large_bufsize": 135168 00:05:11.473 } 00:05:11.473 } 00:05:11.473 ] 00:05:11.473 }, 00:05:11.473 { 00:05:11.473 "subsystem": "sock", 00:05:11.473 "config": [ 00:05:11.473 { 00:05:11.473 "method": "sock_set_default_impl", 00:05:11.473 "params": { 00:05:11.473 "impl_name": "uring" 00:05:11.473 } 00:05:11.473 }, 00:05:11.473 { 00:05:11.473 "method": "sock_impl_set_options", 
00:05:11.473 "params": { 00:05:11.473 "impl_name": "ssl", 00:05:11.473 "recv_buf_size": 4096, 00:05:11.473 "send_buf_size": 4096, 00:05:11.473 "enable_recv_pipe": true, 00:05:11.473 "enable_quickack": false, 00:05:11.473 "enable_placement_id": 0, 00:05:11.473 "enable_zerocopy_send_server": true, 00:05:11.473 "enable_zerocopy_send_client": false, 00:05:11.473 "zerocopy_threshold": 0, 00:05:11.473 "tls_version": 0, 00:05:11.473 "enable_ktls": false 00:05:11.473 } 00:05:11.473 }, 00:05:11.473 { 00:05:11.473 "method": "sock_impl_set_options", 00:05:11.473 "params": { 00:05:11.473 "impl_name": "posix", 00:05:11.473 "recv_buf_size": 2097152, 00:05:11.473 "send_buf_size": 2097152, 00:05:11.473 "enable_recv_pipe": true, 00:05:11.473 "enable_quickack": false, 00:05:11.473 "enable_placement_id": 0, 00:05:11.473 "enable_zerocopy_send_server": true, 00:05:11.473 "enable_zerocopy_send_client": false, 00:05:11.473 "zerocopy_threshold": 0, 00:05:11.473 "tls_version": 0, 00:05:11.473 "enable_ktls": false 00:05:11.473 } 00:05:11.473 }, 00:05:11.473 { 00:05:11.473 "method": "sock_impl_set_options", 00:05:11.473 "params": { 00:05:11.473 "impl_name": "uring", 00:05:11.473 "recv_buf_size": 2097152, 00:05:11.473 "send_buf_size": 2097152, 00:05:11.473 "enable_recv_pipe": true, 00:05:11.473 "enable_quickack": false, 00:05:11.473 "enable_placement_id": 0, 00:05:11.473 "enable_zerocopy_send_server": false, 00:05:11.473 "enable_zerocopy_send_client": false, 00:05:11.473 "zerocopy_threshold": 0, 00:05:11.473 "tls_version": 0, 00:05:11.473 "enable_ktls": false 00:05:11.473 } 00:05:11.473 } 00:05:11.473 ] 00:05:11.473 }, 00:05:11.473 { 00:05:11.473 "subsystem": "vmd", 00:05:11.473 "config": [] 00:05:11.473 }, 00:05:11.473 { 00:05:11.473 "subsystem": "accel", 00:05:11.473 "config": [ 00:05:11.473 { 00:05:11.473 "method": "accel_set_options", 00:05:11.473 "params": { 00:05:11.473 "small_cache_size": 128, 00:05:11.473 "large_cache_size": 16, 00:05:11.473 "task_count": 2048, 00:05:11.473 "sequence_count": 2048, 00:05:11.473 "buf_count": 2048 00:05:11.473 } 00:05:11.473 } 00:05:11.473 ] 00:05:11.473 }, 00:05:11.473 { 00:05:11.473 "subsystem": "bdev", 00:05:11.473 "config": [ 00:05:11.473 { 00:05:11.473 "method": "bdev_set_options", 00:05:11.473 "params": { 00:05:11.473 "bdev_io_pool_size": 65535, 00:05:11.473 "bdev_io_cache_size": 256, 00:05:11.473 "bdev_auto_examine": true, 00:05:11.473 "iobuf_small_cache_size": 128, 00:05:11.473 "iobuf_large_cache_size": 16 00:05:11.473 } 00:05:11.473 }, 00:05:11.473 { 00:05:11.473 "method": "bdev_raid_set_options", 00:05:11.473 "params": { 00:05:11.473 "process_window_size_kb": 1024, 00:05:11.473 "process_max_bandwidth_mb_sec": 0 00:05:11.473 } 00:05:11.473 }, 00:05:11.473 { 00:05:11.473 "method": "bdev_iscsi_set_options", 00:05:11.473 "params": { 00:05:11.473 "timeout_sec": 30 00:05:11.473 } 00:05:11.473 }, 00:05:11.473 { 00:05:11.473 "method": "bdev_nvme_set_options", 00:05:11.473 "params": { 00:05:11.473 "action_on_timeout": "none", 00:05:11.473 "timeout_us": 0, 00:05:11.473 "timeout_admin_us": 0, 00:05:11.473 "keep_alive_timeout_ms": 10000, 00:05:11.473 "arbitration_burst": 0, 00:05:11.473 "low_priority_weight": 0, 00:05:11.473 "medium_priority_weight": 0, 00:05:11.473 "high_priority_weight": 0, 00:05:11.473 "nvme_adminq_poll_period_us": 10000, 00:05:11.473 "nvme_ioq_poll_period_us": 0, 00:05:11.473 "io_queue_requests": 0, 00:05:11.473 "delay_cmd_submit": true, 00:05:11.473 "transport_retry_count": 4, 00:05:11.473 "bdev_retry_count": 3, 00:05:11.473 "transport_ack_timeout": 0, 
00:05:11.473 "ctrlr_loss_timeout_sec": 0, 00:05:11.473 "reconnect_delay_sec": 0, 00:05:11.473 "fast_io_fail_timeout_sec": 0, 00:05:11.473 "disable_auto_failback": false, 00:05:11.473 "generate_uuids": false, 00:05:11.473 "transport_tos": 0, 00:05:11.473 "nvme_error_stat": false, 00:05:11.473 "rdma_srq_size": 0, 00:05:11.473 "io_path_stat": false, 00:05:11.473 "allow_accel_sequence": false, 00:05:11.473 "rdma_max_cq_size": 0, 00:05:11.473 "rdma_cm_event_timeout_ms": 0, 00:05:11.473 "dhchap_digests": [ 00:05:11.473 "sha256", 00:05:11.473 "sha384", 00:05:11.473 "sha512" 00:05:11.473 ], 00:05:11.473 "dhchap_dhgroups": [ 00:05:11.473 "null", 00:05:11.473 "ffdhe2048", 00:05:11.473 "ffdhe3072", 00:05:11.473 "ffdhe4096", 00:05:11.473 "ffdhe6144", 00:05:11.473 "ffdhe8192" 00:05:11.473 ] 00:05:11.473 } 00:05:11.473 }, 00:05:11.473 { 00:05:11.473 "method": "bdev_nvme_set_hotplug", 00:05:11.473 "params": { 00:05:11.473 "period_us": 100000, 00:05:11.473 "enable": false 00:05:11.473 } 00:05:11.473 }, 00:05:11.473 { 00:05:11.473 "method": "bdev_wait_for_examine" 00:05:11.473 } 00:05:11.473 ] 00:05:11.473 }, 00:05:11.473 { 00:05:11.473 "subsystem": "scsi", 00:05:11.473 "config": null 00:05:11.473 }, 00:05:11.473 { 00:05:11.473 "subsystem": "scheduler", 00:05:11.473 "config": [ 00:05:11.473 { 00:05:11.473 "method": "framework_set_scheduler", 00:05:11.473 "params": { 00:05:11.473 "name": "static" 00:05:11.473 } 00:05:11.473 } 00:05:11.473 ] 00:05:11.473 }, 00:05:11.473 { 00:05:11.473 "subsystem": "vhost_scsi", 00:05:11.473 "config": [] 00:05:11.473 }, 00:05:11.473 { 00:05:11.473 "subsystem": "vhost_blk", 00:05:11.473 "config": [] 00:05:11.473 }, 00:05:11.473 { 00:05:11.473 "subsystem": "ublk", 00:05:11.473 "config": [] 00:05:11.473 }, 00:05:11.473 { 00:05:11.473 "subsystem": "nbd", 00:05:11.473 "config": [] 00:05:11.473 }, 00:05:11.473 { 00:05:11.473 "subsystem": "nvmf", 00:05:11.473 "config": [ 00:05:11.473 { 00:05:11.473 "method": "nvmf_set_config", 00:05:11.473 "params": { 00:05:11.473 "discovery_filter": "match_any", 00:05:11.473 "admin_cmd_passthru": { 00:05:11.473 "identify_ctrlr": false 00:05:11.473 } 00:05:11.473 } 00:05:11.473 }, 00:05:11.473 { 00:05:11.473 "method": "nvmf_set_max_subsystems", 00:05:11.473 "params": { 00:05:11.473 "max_subsystems": 1024 00:05:11.473 } 00:05:11.473 }, 00:05:11.474 { 00:05:11.474 "method": "nvmf_set_crdt", 00:05:11.474 "params": { 00:05:11.474 "crdt1": 0, 00:05:11.474 "crdt2": 0, 00:05:11.474 "crdt3": 0 00:05:11.474 } 00:05:11.474 }, 00:05:11.474 { 00:05:11.474 "method": "nvmf_create_transport", 00:05:11.474 "params": { 00:05:11.474 "trtype": "TCP", 00:05:11.474 "max_queue_depth": 128, 00:05:11.474 "max_io_qpairs_per_ctrlr": 127, 00:05:11.474 "in_capsule_data_size": 4096, 00:05:11.474 "max_io_size": 131072, 00:05:11.474 "io_unit_size": 131072, 00:05:11.474 "max_aq_depth": 128, 00:05:11.474 "num_shared_buffers": 511, 00:05:11.474 "buf_cache_size": 4294967295, 00:05:11.474 "dif_insert_or_strip": false, 00:05:11.474 "zcopy": false, 00:05:11.474 "c2h_success": true, 00:05:11.474 "sock_priority": 0, 00:05:11.474 "abort_timeout_sec": 1, 00:05:11.474 "ack_timeout": 0, 00:05:11.474 "data_wr_pool_size": 0 00:05:11.474 } 00:05:11.474 } 00:05:11.474 ] 00:05:11.474 }, 00:05:11.474 { 00:05:11.474 "subsystem": "iscsi", 00:05:11.474 "config": [ 00:05:11.474 { 00:05:11.474 "method": "iscsi_set_options", 00:05:11.474 "params": { 00:05:11.474 "node_base": "iqn.2016-06.io.spdk", 00:05:11.474 "max_sessions": 128, 00:05:11.474 "max_connections_per_session": 2, 00:05:11.474 
"max_queue_depth": 64, 00:05:11.474 "default_time2wait": 2, 00:05:11.474 "default_time2retain": 20, 00:05:11.474 "first_burst_length": 8192, 00:05:11.474 "immediate_data": true, 00:05:11.474 "allow_duplicated_isid": false, 00:05:11.474 "error_recovery_level": 0, 00:05:11.474 "nop_timeout": 60, 00:05:11.474 "nop_in_interval": 30, 00:05:11.474 "disable_chap": false, 00:05:11.474 "require_chap": false, 00:05:11.474 "mutual_chap": false, 00:05:11.474 "chap_group": 0, 00:05:11.474 "max_large_datain_per_connection": 64, 00:05:11.474 "max_r2t_per_connection": 4, 00:05:11.474 "pdu_pool_size": 36864, 00:05:11.474 "immediate_data_pool_size": 16384, 00:05:11.474 "data_out_pool_size": 2048 00:05:11.474 } 00:05:11.474 } 00:05:11.474 ] 00:05:11.474 } 00:05:11.474 ] 00:05:11.474 } 00:05:11.474 19:43:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:11.474 19:43:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58999 00:05:11.474 19:43:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 58999 ']' 00:05:11.474 19:43:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 58999 00:05:11.474 19:43:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:11.474 19:43:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:11.474 19:43:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58999 00:05:11.474 19:43:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:11.474 19:43:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:11.474 killing process with pid 58999 00:05:11.474 19:43:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58999' 00:05:11.474 19:43:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 58999 00:05:11.474 19:43:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 58999 00:05:12.048 19:43:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59032 00:05:12.048 19:43:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:12.048 19:43:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:17.340 19:43:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59032 00:05:17.340 19:43:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 59032 ']' 00:05:17.340 19:43:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 59032 00:05:17.340 19:43:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:17.340 19:43:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:17.340 19:43:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59032 00:05:17.340 19:43:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:17.340 19:43:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:17.340 killing process with pid 59032 00:05:17.340 19:43:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59032' 00:05:17.340 19:43:45 skip_rpc.skip_rpc_with_json 
-- common/autotest_common.sh@969 -- # kill 59032 00:05:17.340 19:43:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 59032 00:05:17.340 19:43:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:17.340 19:43:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:17.340 00:05:17.340 real 0m6.995s 00:05:17.340 user 0m6.675s 00:05:17.341 sys 0m0.637s 00:05:17.341 19:43:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.341 19:43:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.341 ************************************ 00:05:17.341 END TEST skip_rpc_with_json 00:05:17.341 ************************************ 00:05:17.341 19:43:45 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:17.341 19:43:45 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.341 19:43:45 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.341 19:43:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.341 ************************************ 00:05:17.341 START TEST skip_rpc_with_delay 00:05:17.341 ************************************ 00:05:17.341 19:43:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:17.341 19:43:45 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:17.341 19:43:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:17.341 19:43:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:17.341 19:43:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.341 19:43:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:17.341 19:43:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.341 19:43:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:17.341 19:43:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.341 19:43:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:17.341 19:43:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.341 19:43:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:17.341 19:43:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:17.341 [2024-07-24 19:43:45.999187] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:17.341 [2024-07-24 19:43:45.999379] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:17.600 19:43:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:17.600 19:43:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:17.600 19:43:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:17.600 19:43:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:17.600 00:05:17.600 real 0m0.111s 00:05:17.600 user 0m0.065s 00:05:17.600 sys 0m0.044s 00:05:17.600 19:43:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.600 19:43:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:17.600 ************************************ 00:05:17.600 END TEST skip_rpc_with_delay 00:05:17.600 ************************************ 00:05:17.600 19:43:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:17.600 19:43:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:17.600 19:43:46 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:17.600 19:43:46 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.600 19:43:46 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.600 19:43:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.600 ************************************ 00:05:17.600 START TEST exit_on_failed_rpc_init 00:05:17.600 ************************************ 00:05:17.600 19:43:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:17.600 19:43:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59136 00:05:17.600 19:43:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.600 19:43:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59136 00:05:17.600 19:43:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 59136 ']' 00:05:17.600 19:43:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.600 19:43:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:17.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.600 19:43:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.600 19:43:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:17.600 19:43:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:17.600 [2024-07-24 19:43:46.149766] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:05:17.600 [2024-07-24 19:43:46.149897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59136 ] 00:05:17.858 [2024-07-24 19:43:46.287658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.858 [2024-07-24 19:43:46.420646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.858 [2024-07-24 19:43:46.473161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:18.794 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:18.794 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:18.794 19:43:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.794 19:43:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:18.794 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:18.794 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:18.794 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:18.794 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.794 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:18.794 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.794 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:18.794 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.794 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:18.794 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:18.794 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:18.794 [2024-07-24 19:43:47.160169] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:05:18.794 [2024-07-24 19:43:47.160258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59154 ] 00:05:18.794 [2024-07-24 19:43:47.293834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.794 [2024-07-24 19:43:47.443997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.794 [2024-07-24 19:43:47.444095] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:18.794 [2024-07-24 19:43:47.444110] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:18.794 [2024-07-24 19:43:47.444119] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:19.053 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:19.053 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:19.053 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:19.053 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:19.053 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:19.053 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:19.053 19:43:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:19.053 19:43:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59136 00:05:19.053 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 59136 ']' 00:05:19.053 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 59136 00:05:19.053 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:19.053 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:19.053 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59136 00:05:19.053 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:19.053 killing process with pid 59136 00:05:19.053 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:19.053 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59136' 00:05:19.053 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 59136 00:05:19.053 19:43:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 59136 00:05:19.620 00:05:19.620 real 0m1.934s 00:05:19.620 user 0m2.295s 00:05:19.620 sys 0m0.428s 00:05:19.620 19:43:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.620 19:43:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:19.620 ************************************ 00:05:19.620 END TEST exit_on_failed_rpc_init 00:05:19.620 ************************************ 00:05:19.620 19:43:48 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:19.620 00:05:19.620 real 0m14.705s 00:05:19.620 user 0m14.145s 00:05:19.620 sys 0m1.541s 00:05:19.620 19:43:48 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.620 ************************************ 00:05:19.620 END TEST skip_rpc 00:05:19.620 ************************************ 00:05:19.620 19:43:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.620 19:43:48 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:19.620 19:43:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.620 19:43:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.620 19:43:48 -- common/autotest_common.sh@10 -- # set +x 00:05:19.620 
************************************ 00:05:19.620 START TEST rpc_client 00:05:19.620 ************************************ 00:05:19.620 19:43:48 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:19.620 * Looking for test storage... 00:05:19.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:19.620 19:43:48 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:19.620 OK 00:05:19.620 19:43:48 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:19.620 00:05:19.620 real 0m0.095s 00:05:19.620 user 0m0.046s 00:05:19.620 sys 0m0.053s 00:05:19.620 19:43:48 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.620 19:43:48 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:19.620 ************************************ 00:05:19.620 END TEST rpc_client 00:05:19.620 ************************************ 00:05:19.620 19:43:48 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:19.620 19:43:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.620 19:43:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.620 19:43:48 -- common/autotest_common.sh@10 -- # set +x 00:05:19.620 ************************************ 00:05:19.620 START TEST json_config 00:05:19.620 ************************************ 00:05:19.620 19:43:48 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:19.620 19:43:48 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:19.620 19:43:48 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:19.620 19:43:48 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.620 19:43:48 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.620 19:43:48 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.620 19:43:48 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.620 19:43:48 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.620 19:43:48 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.620 19:43:48 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.620 19:43:48 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.620 19:43:48 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.620 19:43:48 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.878 19:43:48 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:05:19.878 19:43:48 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:05:19.878 19:43:48 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.878 19:43:48 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.878 19:43:48 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:19.878 19:43:48 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:19.878 19:43:48 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:19.878 19:43:48 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.878 19:43:48 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.878 19:43:48 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.879 19:43:48 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.879 19:43:48 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.879 19:43:48 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.879 19:43:48 json_config -- paths/export.sh@5 -- # export PATH 00:05:19.879 19:43:48 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.879 19:43:48 json_config -- nvmf/common.sh@47 -- # : 0 00:05:19.879 19:43:48 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:19.879 19:43:48 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:19.879 19:43:48 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:19.879 19:43:48 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.879 19:43:48 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.879 19:43:48 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:19.879 19:43:48 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:19.879 19:43:48 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:19.879 19:43:48 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:19.879 19:43:48 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:19.879 19:43:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:19.879 19:43:48 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:19.879 19:43:48 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:19.879 19:43:48 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:19.879 19:43:48 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:05:19.879 19:43:48 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:19.879 19:43:48 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:19.879 19:43:48 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:19.879 19:43:48 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:19.879 19:43:48 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:19.879 19:43:48 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:19.879 19:43:48 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:19.879 19:43:48 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:19.879 INFO: JSON configuration test init 00:05:19.879 19:43:48 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:19.879 19:43:48 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:19.879 19:43:48 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:19.879 19:43:48 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:19.879 19:43:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.879 19:43:48 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:19.879 19:43:48 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:19.879 19:43:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.879 19:43:48 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:19.879 19:43:48 json_config -- json_config/common.sh@9 -- # local app=target 00:05:19.879 19:43:48 json_config -- json_config/common.sh@10 -- # shift 00:05:19.879 19:43:48 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:19.879 19:43:48 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:19.879 19:43:48 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:19.879 19:43:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.879 19:43:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.879 Waiting for target to run... 00:05:19.879 19:43:48 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59282 00:05:19.879 19:43:48 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:19.879 19:43:48 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:05:19.879 19:43:48 json_config -- json_config/common.sh@25 -- # waitforlisten 59282 /var/tmp/spdk_tgt.sock 00:05:19.879 19:43:48 json_config -- common/autotest_common.sh@831 -- # '[' -z 59282 ']' 00:05:19.879 19:43:48 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:19.879 19:43:48 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.879 19:43:48 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:19.879 19:43:48 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.879 19:43:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:19.879 [2024-07-24 19:43:48.376108] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:05:19.879 [2024-07-24 19:43:48.376225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59282 ] 00:05:20.445 [2024-07-24 19:43:48.913163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.445 [2024-07-24 19:43:49.004378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.717 00:05:20.717 19:43:49 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.717 19:43:49 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:20.717 19:43:49 json_config -- json_config/common.sh@26 -- # echo '' 00:05:20.717 19:43:49 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:20.717 19:43:49 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:20.717 19:43:49 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:20.717 19:43:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.717 19:43:49 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:20.717 19:43:49 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:20.717 19:43:49 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:20.717 19:43:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.985 19:43:49 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:20.985 19:43:49 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:20.985 19:43:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:21.244 [2024-07-24 19:43:49.728871] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:21.502 19:43:49 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:21.502 19:43:49 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:21.502 19:43:49 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:21.502 19:43:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.502 19:43:49 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:21.502 19:43:49 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 
'bdev_unregister') 00:05:21.502 19:43:49 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:21.502 19:43:49 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:21.502 19:43:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:21.502 19:43:49 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:21.760 19:43:50 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:21.760 19:43:50 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:21.760 19:43:50 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:21.760 19:43:50 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:21.760 19:43:50 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:21.760 19:43:50 json_config -- json_config/json_config.sh@51 -- # sort 00:05:21.760 19:43:50 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:21.760 19:43:50 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:21.760 19:43:50 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:21.760 19:43:50 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:21.760 19:43:50 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:21.760 19:43:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.760 19:43:50 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:21.760 19:43:50 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:21.760 19:43:50 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:21.760 19:43:50 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:21.760 19:43:50 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:21.760 19:43:50 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:21.760 19:43:50 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:21.760 19:43:50 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:21.760 19:43:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.760 19:43:50 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:21.760 19:43:50 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:21.760 19:43:50 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:21.760 19:43:50 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:21.760 19:43:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:22.018 MallocForNvmf0 00:05:22.018 19:43:50 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:22.018 19:43:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:22.276 MallocForNvmf1 00:05:22.276 19:43:50 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:22.276 19:43:50 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:22.534 [2024-07-24 19:43:51.020637] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:22.534 19:43:51 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:22.534 19:43:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:22.792 19:43:51 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:22.792 19:43:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:23.050 19:43:51 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:23.050 19:43:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:23.308 19:43:51 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:23.308 19:43:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:23.566 [2024-07-24 19:43:51.989157] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:23.566 19:43:52 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:23.566 19:43:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:23.566 19:43:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.566 19:43:52 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:23.566 19:43:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:23.566 19:43:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.566 19:43:52 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:23.566 19:43:52 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:23.566 19:43:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:23.824 MallocBdevForConfigChangeCheck 00:05:23.824 19:43:52 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:23.824 19:43:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:23.824 19:43:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.824 19:43:52 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:23.824 19:43:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:24.391 INFO: shutting down applications... 00:05:24.391 19:43:52 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 
00:05:24.391 19:43:52 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:24.391 19:43:52 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:24.391 19:43:52 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:24.391 19:43:52 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:24.768 Calling clear_iscsi_subsystem 00:05:24.768 Calling clear_nvmf_subsystem 00:05:24.768 Calling clear_nbd_subsystem 00:05:24.768 Calling clear_ublk_subsystem 00:05:24.768 Calling clear_vhost_blk_subsystem 00:05:24.768 Calling clear_vhost_scsi_subsystem 00:05:24.768 Calling clear_bdev_subsystem 00:05:24.768 19:43:53 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:24.768 19:43:53 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:24.768 19:43:53 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:24.768 19:43:53 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:24.768 19:43:53 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:24.768 19:43:53 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:25.025 19:43:53 json_config -- json_config/json_config.sh@349 -- # break 00:05:25.025 19:43:53 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:25.025 19:43:53 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:25.025 19:43:53 json_config -- json_config/common.sh@31 -- # local app=target 00:05:25.025 19:43:53 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:25.025 19:43:53 json_config -- json_config/common.sh@35 -- # [[ -n 59282 ]] 00:05:25.025 19:43:53 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59282 00:05:25.025 19:43:53 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:25.025 19:43:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:25.025 19:43:53 json_config -- json_config/common.sh@41 -- # kill -0 59282 00:05:25.025 19:43:53 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:25.592 19:43:53 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:25.592 19:43:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:25.592 19:43:53 json_config -- json_config/common.sh@41 -- # kill -0 59282 00:05:25.592 19:43:53 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:25.592 19:43:53 json_config -- json_config/common.sh@43 -- # break 00:05:25.592 19:43:53 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:25.592 SPDK target shutdown done 00:05:25.592 19:43:53 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:25.592 INFO: relaunching applications... 00:05:25.592 19:43:53 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 
00:05:25.592 19:43:53 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:25.592 19:43:53 json_config -- json_config/common.sh@9 -- # local app=target 00:05:25.592 19:43:53 json_config -- json_config/common.sh@10 -- # shift 00:05:25.592 19:43:53 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:25.592 19:43:53 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:25.592 19:43:53 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:25.592 19:43:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:25.592 19:43:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:25.592 19:43:53 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59473 00:05:25.592 Waiting for target to run... 00:05:25.592 19:43:53 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:25.592 19:43:53 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:25.592 19:43:53 json_config -- json_config/common.sh@25 -- # waitforlisten 59473 /var/tmp/spdk_tgt.sock 00:05:25.592 19:43:53 json_config -- common/autotest_common.sh@831 -- # '[' -z 59473 ']' 00:05:25.592 19:43:53 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:25.592 19:43:53 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:25.592 19:43:53 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:25.592 19:43:53 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.592 19:43:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.592 [2024-07-24 19:43:54.060362] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:05:25.592 [2024-07-24 19:43:54.060513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59473 ] 00:05:25.851 [2024-07-24 19:43:54.484026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.109 [2024-07-24 19:43:54.579288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.109 [2024-07-24 19:43:54.705541] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:26.368 [2024-07-24 19:43:54.916506] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:26.368 [2024-07-24 19:43:54.948598] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:26.627 19:43:55 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.627 19:43:55 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:26.627 00:05:26.627 19:43:55 json_config -- json_config/common.sh@26 -- # echo '' 00:05:26.627 19:43:55 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:26.627 INFO: Checking if target configuration is the same... 
00:05:26.627 19:43:55 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:26.627 19:43:55 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:26.627 19:43:55 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:26.627 19:43:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:26.627 + '[' 2 -ne 2 ']' 00:05:26.627 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:26.627 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:26.627 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:26.627 +++ basename /dev/fd/62 00:05:26.627 ++ mktemp /tmp/62.XXX 00:05:26.627 + tmp_file_1=/tmp/62.bOl 00:05:26.627 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:26.627 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:26.627 + tmp_file_2=/tmp/spdk_tgt_config.json.sWv 00:05:26.627 + ret=0 00:05:26.627 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:26.885 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:27.143 + diff -u /tmp/62.bOl /tmp/spdk_tgt_config.json.sWv 00:05:27.143 + echo 'INFO: JSON config files are the same' 00:05:27.143 INFO: JSON config files are the same 00:05:27.143 + rm /tmp/62.bOl /tmp/spdk_tgt_config.json.sWv 00:05:27.143 + exit 0 00:05:27.143 19:43:55 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:27.143 INFO: changing configuration and checking if this can be detected... 00:05:27.143 19:43:55 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:27.143 19:43:55 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:27.143 19:43:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:27.401 19:43:55 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:27.401 19:43:55 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:27.401 19:43:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:27.401 + '[' 2 -ne 2 ']' 00:05:27.401 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:27.401 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:27.401 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:27.401 +++ basename /dev/fd/62 00:05:27.401 ++ mktemp /tmp/62.XXX 00:05:27.401 + tmp_file_1=/tmp/62.WTb 00:05:27.401 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:27.401 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:27.401 + tmp_file_2=/tmp/spdk_tgt_config.json.LG3 00:05:27.401 + ret=0 00:05:27.401 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:27.659 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:27.919 + diff -u /tmp/62.WTb /tmp/spdk_tgt_config.json.LG3 00:05:27.919 + ret=1 00:05:27.919 + echo '=== Start of file: /tmp/62.WTb ===' 00:05:27.919 + cat /tmp/62.WTb 00:05:27.919 + echo '=== End of file: /tmp/62.WTb ===' 00:05:27.919 + echo '' 00:05:27.919 + echo '=== Start of file: /tmp/spdk_tgt_config.json.LG3 ===' 00:05:27.919 + cat /tmp/spdk_tgt_config.json.LG3 00:05:27.919 + echo '=== End of file: /tmp/spdk_tgt_config.json.LG3 ===' 00:05:27.919 + echo '' 00:05:27.919 + rm /tmp/62.WTb /tmp/spdk_tgt_config.json.LG3 00:05:27.919 + exit 1 00:05:27.919 INFO: configuration change detected. 00:05:27.919 19:43:56 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:27.919 19:43:56 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:27.919 19:43:56 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:27.919 19:43:56 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:27.919 19:43:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.919 19:43:56 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:27.919 19:43:56 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:27.919 19:43:56 json_config -- json_config/json_config.sh@321 -- # [[ -n 59473 ]] 00:05:27.919 19:43:56 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:27.919 19:43:56 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:27.919 19:43:56 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:27.919 19:43:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.919 19:43:56 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:27.919 19:43:56 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:27.919 19:43:56 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:27.919 19:43:56 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:27.919 19:43:56 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:27.919 19:43:56 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:27.919 19:43:56 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:27.919 19:43:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.919 19:43:56 json_config -- json_config/json_config.sh@327 -- # killprocess 59473 00:05:27.919 19:43:56 json_config -- common/autotest_common.sh@950 -- # '[' -z 59473 ']' 00:05:27.919 19:43:56 json_config -- common/autotest_common.sh@954 -- # kill -0 59473 00:05:27.919 19:43:56 json_config -- common/autotest_common.sh@955 -- # uname 00:05:27.919 19:43:56 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:27.919 19:43:56 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59473 00:05:27.919 
19:43:56 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:27.919 19:43:56 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:27.919 killing process with pid 59473 00:05:27.919 19:43:56 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59473' 00:05:27.919 19:43:56 json_config -- common/autotest_common.sh@969 -- # kill 59473 00:05:27.919 19:43:56 json_config -- common/autotest_common.sh@974 -- # wait 59473 00:05:28.178 19:43:56 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:28.178 19:43:56 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:28.178 19:43:56 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:28.178 19:43:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.178 19:43:56 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:28.178 19:43:56 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:28.178 INFO: Success 00:05:28.178 00:05:28.178 real 0m8.488s 00:05:28.178 user 0m12.138s 00:05:28.178 sys 0m1.815s 00:05:28.178 19:43:56 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.178 19:43:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.178 ************************************ 00:05:28.178 END TEST json_config 00:05:28.178 ************************************ 00:05:28.178 19:43:56 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:28.178 19:43:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.178 19:43:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.178 19:43:56 -- common/autotest_common.sh@10 -- # set +x 00:05:28.178 ************************************ 00:05:28.178 START TEST json_config_extra_key 00:05:28.178 ************************************ 00:05:28.178 19:43:56 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:28.178 19:43:56 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:28.178 19:43:56 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:28.178 19:43:56 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:28.178 19:43:56 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:28.178 19:43:56 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:28.178 19:43:56 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:28.178 19:43:56 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:28.178 19:43:56 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:28.178 19:43:56 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:28.178 19:43:56 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:28.178 19:43:56 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:28.178 19:43:56 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:28.178 19:43:56 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:05:28.179 19:43:56 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:05:28.179 19:43:56 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:28.179 19:43:56 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:28.179 19:43:56 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:28.179 19:43:56 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:28.179 19:43:56 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:28.179 19:43:56 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:28.179 19:43:56 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:28.179 19:43:56 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:28.179 19:43:56 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.179 19:43:56 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.179 19:43:56 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.179 19:43:56 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:28.179 19:43:56 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.179 19:43:56 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:28.179 19:43:56 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:28.179 19:43:56 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:28.179 19:43:56 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:28.179 19:43:56 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:28.179 19:43:56 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:28.179 19:43:56 json_config_extra_key -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:28.179 19:43:56 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:28.179 19:43:56 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:28.179 19:43:56 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:28.179 19:43:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:28.179 19:43:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:28.179 19:43:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:28.179 19:43:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:28.179 19:43:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:28.179 19:43:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:28.179 19:43:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:28.179 19:43:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:28.179 19:43:56 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:28.179 INFO: launching applications... 00:05:28.179 19:43:56 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:28.179 19:43:56 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:28.179 19:43:56 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:28.179 19:43:56 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:28.179 19:43:56 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:28.179 19:43:56 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:28.179 19:43:56 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:28.179 19:43:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.179 19:43:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.179 19:43:56 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59619 00:05:28.179 19:43:56 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:28.179 Waiting for target to run... 
00:05:28.179 19:43:56 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59619 /var/tmp/spdk_tgt.sock 00:05:28.179 19:43:56 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:28.179 19:43:56 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 59619 ']' 00:05:28.179 19:43:56 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:28.179 19:43:56 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:28.179 19:43:56 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:28.179 19:43:56 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.179 19:43:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:28.438 [2024-07-24 19:43:56.908732] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:05:28.438 [2024-07-24 19:43:56.908850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59619 ] 00:05:28.696 [2024-07-24 19:43:57.333966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.956 [2024-07-24 19:43:57.436035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.956 [2024-07-24 19:43:57.458031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:29.522 19:43:57 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.522 19:43:57 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:29.522 00:05:29.522 19:43:57 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:29.522 INFO: shutting down applications... 00:05:29.522 19:43:57 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
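The waitforlisten call above (from common/autotest_common.sh) is what separates "Waiting for target to run..." from the "return 0" that follows. A minimal stand-in is sketched below; wait_for_rpc is a made-up name, the 0.1 s poll interval is a guess, and whether the real helper probes with rpc_get_methods or something else is an implementation detail, but the shape of the loop (retry up to 100 times, give up if the pid dies, succeed once rpc.py gets an answer on the UNIX socket) matches what the trace shows.

  wait_for_rpc() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do             # max_retries=100, as echoed above
          kill -0 "$pid" 2>/dev/null || return 1  # target died while starting up
          "$SPDK_DIR/scripts/rpc.py" -s "$rpc_addr" -t 1 rpc_get_methods \
              &>/dev/null && return 0             # app is up and answering JSON-RPC
          sleep 0.1
      done
      return 1                                    # never started listening
  }

  wait_for_rpc "${app_pid[target]}" /var/tmp/spdk_tgt.sock   # pid and socket from the sketch above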
00:05:29.522 19:43:57 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:29.522 19:43:57 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:29.522 19:43:57 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:29.522 19:43:57 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59619 ]] 00:05:29.522 19:43:57 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59619 00:05:29.522 19:43:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:29.522 19:43:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:29.522 19:43:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59619 00:05:29.522 19:43:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:29.781 19:43:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:29.781 19:43:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:29.781 19:43:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59619 00:05:29.781 19:43:58 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:29.781 19:43:58 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:29.781 19:43:58 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:29.781 SPDK target shutdown done 00:05:29.781 19:43:58 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:29.781 Success 00:05:29.781 19:43:58 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:29.781 00:05:29.781 real 0m1.687s 00:05:29.781 user 0m1.623s 00:05:29.781 sys 0m0.443s 00:05:29.781 19:43:58 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.781 19:43:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:29.781 ************************************ 00:05:29.781 END TEST json_config_extra_key 00:05:29.781 ************************************ 00:05:30.039 19:43:58 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:30.039 19:43:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.039 19:43:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.039 19:43:58 -- common/autotest_common.sh@10 -- # set +x 00:05:30.039 ************************************ 00:05:30.039 START TEST alias_rpc 00:05:30.039 ************************************ 00:05:30.039 19:43:58 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:30.039 * Looking for test storage... 
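The alias_rpc test starting here reuses the same start/wait/kill pattern; its one distinctive step, visible further down in the trace, is replaying a JSON configuration into the already-running target with scripts/rpc.py load_config -i. Roughly, and with hedges: my_config.json is a placeholder, the claim that load_config reads the configuration on stdin and that -i additionally accepts deprecated RPC alias names is my reading of the tool rather than something the log states, and wait_for_rpc is the sketch helper from above.

  # A bare target listens on the default /var/tmp/spdk.sock when no -r is given.
  "$SPDK_DIR/build/bin/spdk_tgt" &
  spdk_tgt_pid=$!
  wait_for_rpc "$spdk_tgt_pid"

  # Replay a saved JSON config as individual RPCs against the live target.
  "$SPDK_DIR/scripts/rpc.py" load_config -i < my_config.json

  kill "$spdk_tgt_pid"
  wait "$spdk_tgt_pid" || true   # reap the child; non-zero status is expected after the signal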
00:05:30.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:30.039 19:43:58 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:30.039 19:43:58 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59684 00:05:30.039 19:43:58 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59684 00:05:30.039 19:43:58 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.039 19:43:58 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 59684 ']' 00:05:30.039 19:43:58 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.039 19:43:58 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:30.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.039 19:43:58 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.039 19:43:58 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:30.039 19:43:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.039 [2024-07-24 19:43:58.626102] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:05:30.039 [2024-07-24 19:43:58.626194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59684 ] 00:05:30.298 [2024-07-24 19:43:58.761362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.298 [2024-07-24 19:43:58.881537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.298 [2024-07-24 19:43:58.934473] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:31.233 19:43:59 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:31.233 19:43:59 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:31.233 19:43:59 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:31.491 19:43:59 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59684 00:05:31.491 19:43:59 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 59684 ']' 00:05:31.491 19:43:59 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 59684 00:05:31.491 19:43:59 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:31.491 19:43:59 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:31.491 19:43:59 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59684 00:05:31.491 19:43:59 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:31.491 19:43:59 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:31.491 19:43:59 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59684' 00:05:31.491 killing process with pid 59684 00:05:31.491 19:43:59 alias_rpc -- common/autotest_common.sh@969 -- # kill 59684 00:05:31.491 19:43:59 alias_rpc -- common/autotest_common.sh@974 -- # wait 59684 00:05:31.749 00:05:31.749 real 0m1.824s 00:05:31.749 user 0m2.090s 00:05:31.749 sys 0m0.435s 00:05:31.749 ************************************ 00:05:31.749 END TEST alias_rpc 00:05:31.749 19:44:00 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.749 
19:44:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.749 ************************************ 00:05:31.749 19:44:00 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:31.749 19:44:00 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:31.749 19:44:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.749 19:44:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.749 19:44:00 -- common/autotest_common.sh@10 -- # set +x 00:05:31.749 ************************************ 00:05:31.749 START TEST spdkcli_tcp 00:05:31.749 ************************************ 00:05:31.749 19:44:00 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:32.008 * Looking for test storage... 00:05:32.008 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:32.008 19:44:00 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:32.008 19:44:00 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:32.008 19:44:00 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:32.008 19:44:00 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:32.008 19:44:00 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:32.008 19:44:00 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:32.008 19:44:00 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:32.008 19:44:00 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:32.008 19:44:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:32.008 19:44:00 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59760 00:05:32.008 19:44:00 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:32.008 19:44:00 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59760 00:05:32.008 19:44:00 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 59760 ']' 00:05:32.008 19:44:00 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.008 19:44:00 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.008 19:44:00 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.008 19:44:00 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.008 19:44:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:32.008 [2024-07-24 19:44:00.514591] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
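The spdkcli_tcp run beginning here is about reaching the JSON-RPC server over TCP instead of the UNIX socket. Stripped of the harness, the mechanism in the trace below is a socat relay from a local TCP port onto /var/tmp/spdk.sock, with rpc.py pointed at that port. The address 127.0.0.1, port 9998 and the -r/-t retry and timeout values are exactly the ones the test uses; SPDK_DIR is again shorthand for the repo path.

  # Expose the target's UNIX-domain RPC socket on a local TCP port. Without
  # ',fork' socat serves a single connection and then exits, which is enough here.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!

  # Any rpc.py call can now travel over TCP instead of the socket file.
  "$SPDK_DIR/scripts/rpc.py" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

  kill "$socat_pid"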
00:05:32.008 [2024-07-24 19:44:00.514698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59760 ] 00:05:32.008 [2024-07-24 19:44:00.655438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.266 [2024-07-24 19:44:00.788041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.266 [2024-07-24 19:44:00.788056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.266 [2024-07-24 19:44:00.845488] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:32.835 19:44:01 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.835 19:44:01 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:32.835 19:44:01 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:32.835 19:44:01 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59777 00:05:32.835 19:44:01 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:33.094 [ 00:05:33.094 "bdev_malloc_delete", 00:05:33.094 "bdev_malloc_create", 00:05:33.094 "bdev_null_resize", 00:05:33.094 "bdev_null_delete", 00:05:33.094 "bdev_null_create", 00:05:33.094 "bdev_nvme_cuse_unregister", 00:05:33.094 "bdev_nvme_cuse_register", 00:05:33.094 "bdev_opal_new_user", 00:05:33.094 "bdev_opal_set_lock_state", 00:05:33.094 "bdev_opal_delete", 00:05:33.094 "bdev_opal_get_info", 00:05:33.094 "bdev_opal_create", 00:05:33.094 "bdev_nvme_opal_revert", 00:05:33.094 "bdev_nvme_opal_init", 00:05:33.094 "bdev_nvme_send_cmd", 00:05:33.094 "bdev_nvme_get_path_iostat", 00:05:33.094 "bdev_nvme_get_mdns_discovery_info", 00:05:33.094 "bdev_nvme_stop_mdns_discovery", 00:05:33.094 "bdev_nvme_start_mdns_discovery", 00:05:33.094 "bdev_nvme_set_multipath_policy", 00:05:33.094 "bdev_nvme_set_preferred_path", 00:05:33.094 "bdev_nvme_get_io_paths", 00:05:33.094 "bdev_nvme_remove_error_injection", 00:05:33.094 "bdev_nvme_add_error_injection", 00:05:33.094 "bdev_nvme_get_discovery_info", 00:05:33.094 "bdev_nvme_stop_discovery", 00:05:33.094 "bdev_nvme_start_discovery", 00:05:33.094 "bdev_nvme_get_controller_health_info", 00:05:33.094 "bdev_nvme_disable_controller", 00:05:33.094 "bdev_nvme_enable_controller", 00:05:33.094 "bdev_nvme_reset_controller", 00:05:33.094 "bdev_nvme_get_transport_statistics", 00:05:33.094 "bdev_nvme_apply_firmware", 00:05:33.094 "bdev_nvme_detach_controller", 00:05:33.094 "bdev_nvme_get_controllers", 00:05:33.094 "bdev_nvme_attach_controller", 00:05:33.094 "bdev_nvme_set_hotplug", 00:05:33.094 "bdev_nvme_set_options", 00:05:33.094 "bdev_passthru_delete", 00:05:33.094 "bdev_passthru_create", 00:05:33.094 "bdev_lvol_set_parent_bdev", 00:05:33.094 "bdev_lvol_set_parent", 00:05:33.094 "bdev_lvol_check_shallow_copy", 00:05:33.094 "bdev_lvol_start_shallow_copy", 00:05:33.094 "bdev_lvol_grow_lvstore", 00:05:33.094 "bdev_lvol_get_lvols", 00:05:33.094 "bdev_lvol_get_lvstores", 00:05:33.094 "bdev_lvol_delete", 00:05:33.094 "bdev_lvol_set_read_only", 00:05:33.094 "bdev_lvol_resize", 00:05:33.094 "bdev_lvol_decouple_parent", 00:05:33.094 "bdev_lvol_inflate", 00:05:33.094 "bdev_lvol_rename", 00:05:33.094 "bdev_lvol_clone_bdev", 00:05:33.094 "bdev_lvol_clone", 00:05:33.094 "bdev_lvol_snapshot", 00:05:33.094 "bdev_lvol_create", 
00:05:33.094 "bdev_lvol_delete_lvstore", 00:05:33.094 "bdev_lvol_rename_lvstore", 00:05:33.094 "bdev_lvol_create_lvstore", 00:05:33.094 "bdev_raid_set_options", 00:05:33.094 "bdev_raid_remove_base_bdev", 00:05:33.094 "bdev_raid_add_base_bdev", 00:05:33.094 "bdev_raid_delete", 00:05:33.094 "bdev_raid_create", 00:05:33.094 "bdev_raid_get_bdevs", 00:05:33.094 "bdev_error_inject_error", 00:05:33.094 "bdev_error_delete", 00:05:33.094 "bdev_error_create", 00:05:33.094 "bdev_split_delete", 00:05:33.094 "bdev_split_create", 00:05:33.094 "bdev_delay_delete", 00:05:33.094 "bdev_delay_create", 00:05:33.094 "bdev_delay_update_latency", 00:05:33.094 "bdev_zone_block_delete", 00:05:33.094 "bdev_zone_block_create", 00:05:33.094 "blobfs_create", 00:05:33.094 "blobfs_detect", 00:05:33.094 "blobfs_set_cache_size", 00:05:33.094 "bdev_aio_delete", 00:05:33.094 "bdev_aio_rescan", 00:05:33.094 "bdev_aio_create", 00:05:33.094 "bdev_ftl_set_property", 00:05:33.094 "bdev_ftl_get_properties", 00:05:33.094 "bdev_ftl_get_stats", 00:05:33.094 "bdev_ftl_unmap", 00:05:33.094 "bdev_ftl_unload", 00:05:33.094 "bdev_ftl_delete", 00:05:33.094 "bdev_ftl_load", 00:05:33.094 "bdev_ftl_create", 00:05:33.094 "bdev_virtio_attach_controller", 00:05:33.094 "bdev_virtio_scsi_get_devices", 00:05:33.094 "bdev_virtio_detach_controller", 00:05:33.094 "bdev_virtio_blk_set_hotplug", 00:05:33.094 "bdev_iscsi_delete", 00:05:33.094 "bdev_iscsi_create", 00:05:33.094 "bdev_iscsi_set_options", 00:05:33.094 "bdev_uring_delete", 00:05:33.094 "bdev_uring_rescan", 00:05:33.094 "bdev_uring_create", 00:05:33.094 "accel_error_inject_error", 00:05:33.094 "ioat_scan_accel_module", 00:05:33.094 "dsa_scan_accel_module", 00:05:33.094 "iaa_scan_accel_module", 00:05:33.094 "keyring_file_remove_key", 00:05:33.094 "keyring_file_add_key", 00:05:33.094 "keyring_linux_set_options", 00:05:33.094 "iscsi_get_histogram", 00:05:33.094 "iscsi_enable_histogram", 00:05:33.094 "iscsi_set_options", 00:05:33.094 "iscsi_get_auth_groups", 00:05:33.094 "iscsi_auth_group_remove_secret", 00:05:33.094 "iscsi_auth_group_add_secret", 00:05:33.094 "iscsi_delete_auth_group", 00:05:33.094 "iscsi_create_auth_group", 00:05:33.094 "iscsi_set_discovery_auth", 00:05:33.094 "iscsi_get_options", 00:05:33.094 "iscsi_target_node_request_logout", 00:05:33.094 "iscsi_target_node_set_redirect", 00:05:33.094 "iscsi_target_node_set_auth", 00:05:33.094 "iscsi_target_node_add_lun", 00:05:33.094 "iscsi_get_stats", 00:05:33.094 "iscsi_get_connections", 00:05:33.094 "iscsi_portal_group_set_auth", 00:05:33.094 "iscsi_start_portal_group", 00:05:33.094 "iscsi_delete_portal_group", 00:05:33.094 "iscsi_create_portal_group", 00:05:33.094 "iscsi_get_portal_groups", 00:05:33.094 "iscsi_delete_target_node", 00:05:33.094 "iscsi_target_node_remove_pg_ig_maps", 00:05:33.094 "iscsi_target_node_add_pg_ig_maps", 00:05:33.094 "iscsi_create_target_node", 00:05:33.094 "iscsi_get_target_nodes", 00:05:33.094 "iscsi_delete_initiator_group", 00:05:33.094 "iscsi_initiator_group_remove_initiators", 00:05:33.094 "iscsi_initiator_group_add_initiators", 00:05:33.094 "iscsi_create_initiator_group", 00:05:33.094 "iscsi_get_initiator_groups", 00:05:33.094 "nvmf_set_crdt", 00:05:33.094 "nvmf_set_config", 00:05:33.094 "nvmf_set_max_subsystems", 00:05:33.094 "nvmf_stop_mdns_prr", 00:05:33.094 "nvmf_publish_mdns_prr", 00:05:33.094 "nvmf_subsystem_get_listeners", 00:05:33.094 "nvmf_subsystem_get_qpairs", 00:05:33.094 "nvmf_subsystem_get_controllers", 00:05:33.094 "nvmf_get_stats", 00:05:33.094 "nvmf_get_transports", 00:05:33.094 
"nvmf_create_transport", 00:05:33.094 "nvmf_get_targets", 00:05:33.094 "nvmf_delete_target", 00:05:33.094 "nvmf_create_target", 00:05:33.094 "nvmf_subsystem_allow_any_host", 00:05:33.094 "nvmf_subsystem_remove_host", 00:05:33.094 "nvmf_subsystem_add_host", 00:05:33.094 "nvmf_ns_remove_host", 00:05:33.094 "nvmf_ns_add_host", 00:05:33.094 "nvmf_subsystem_remove_ns", 00:05:33.094 "nvmf_subsystem_add_ns", 00:05:33.094 "nvmf_subsystem_listener_set_ana_state", 00:05:33.094 "nvmf_discovery_get_referrals", 00:05:33.094 "nvmf_discovery_remove_referral", 00:05:33.094 "nvmf_discovery_add_referral", 00:05:33.094 "nvmf_subsystem_remove_listener", 00:05:33.094 "nvmf_subsystem_add_listener", 00:05:33.094 "nvmf_delete_subsystem", 00:05:33.094 "nvmf_create_subsystem", 00:05:33.094 "nvmf_get_subsystems", 00:05:33.094 "env_dpdk_get_mem_stats", 00:05:33.094 "nbd_get_disks", 00:05:33.094 "nbd_stop_disk", 00:05:33.094 "nbd_start_disk", 00:05:33.094 "ublk_recover_disk", 00:05:33.094 "ublk_get_disks", 00:05:33.094 "ublk_stop_disk", 00:05:33.094 "ublk_start_disk", 00:05:33.094 "ublk_destroy_target", 00:05:33.094 "ublk_create_target", 00:05:33.094 "virtio_blk_create_transport", 00:05:33.094 "virtio_blk_get_transports", 00:05:33.094 "vhost_controller_set_coalescing", 00:05:33.094 "vhost_get_controllers", 00:05:33.094 "vhost_delete_controller", 00:05:33.094 "vhost_create_blk_controller", 00:05:33.094 "vhost_scsi_controller_remove_target", 00:05:33.094 "vhost_scsi_controller_add_target", 00:05:33.094 "vhost_start_scsi_controller", 00:05:33.094 "vhost_create_scsi_controller", 00:05:33.094 "thread_set_cpumask", 00:05:33.094 "framework_get_governor", 00:05:33.094 "framework_get_scheduler", 00:05:33.094 "framework_set_scheduler", 00:05:33.094 "framework_get_reactors", 00:05:33.094 "thread_get_io_channels", 00:05:33.094 "thread_get_pollers", 00:05:33.094 "thread_get_stats", 00:05:33.094 "framework_monitor_context_switch", 00:05:33.094 "spdk_kill_instance", 00:05:33.094 "log_enable_timestamps", 00:05:33.094 "log_get_flags", 00:05:33.094 "log_clear_flag", 00:05:33.094 "log_set_flag", 00:05:33.094 "log_get_level", 00:05:33.094 "log_set_level", 00:05:33.094 "log_get_print_level", 00:05:33.094 "log_set_print_level", 00:05:33.094 "framework_enable_cpumask_locks", 00:05:33.094 "framework_disable_cpumask_locks", 00:05:33.094 "framework_wait_init", 00:05:33.094 "framework_start_init", 00:05:33.094 "scsi_get_devices", 00:05:33.094 "bdev_get_histogram", 00:05:33.094 "bdev_enable_histogram", 00:05:33.094 "bdev_set_qos_limit", 00:05:33.094 "bdev_set_qd_sampling_period", 00:05:33.094 "bdev_get_bdevs", 00:05:33.094 "bdev_reset_iostat", 00:05:33.094 "bdev_get_iostat", 00:05:33.094 "bdev_examine", 00:05:33.094 "bdev_wait_for_examine", 00:05:33.094 "bdev_set_options", 00:05:33.094 "notify_get_notifications", 00:05:33.094 "notify_get_types", 00:05:33.094 "accel_get_stats", 00:05:33.094 "accel_set_options", 00:05:33.094 "accel_set_driver", 00:05:33.094 "accel_crypto_key_destroy", 00:05:33.094 "accel_crypto_keys_get", 00:05:33.094 "accel_crypto_key_create", 00:05:33.094 "accel_assign_opc", 00:05:33.094 "accel_get_module_info", 00:05:33.094 "accel_get_opc_assignments", 00:05:33.094 "vmd_rescan", 00:05:33.094 "vmd_remove_device", 00:05:33.094 "vmd_enable", 00:05:33.094 "sock_get_default_impl", 00:05:33.094 "sock_set_default_impl", 00:05:33.094 "sock_impl_set_options", 00:05:33.094 "sock_impl_get_options", 00:05:33.094 "iobuf_get_stats", 00:05:33.094 "iobuf_set_options", 00:05:33.094 "framework_get_pci_devices", 00:05:33.094 
"framework_get_config", 00:05:33.094 "framework_get_subsystems", 00:05:33.094 "trace_get_info", 00:05:33.094 "trace_get_tpoint_group_mask", 00:05:33.094 "trace_disable_tpoint_group", 00:05:33.094 "trace_enable_tpoint_group", 00:05:33.094 "trace_clear_tpoint_mask", 00:05:33.094 "trace_set_tpoint_mask", 00:05:33.094 "keyring_get_keys", 00:05:33.094 "spdk_get_version", 00:05:33.094 "rpc_get_methods" 00:05:33.094 ] 00:05:33.094 19:44:01 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:33.094 19:44:01 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:33.094 19:44:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:33.094 19:44:01 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:33.094 19:44:01 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59760 00:05:33.094 19:44:01 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 59760 ']' 00:05:33.094 19:44:01 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 59760 00:05:33.094 19:44:01 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:33.094 19:44:01 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.094 19:44:01 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59760 00:05:33.352 19:44:01 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.352 killing process with pid 59760 00:05:33.352 19:44:01 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.352 19:44:01 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59760' 00:05:33.352 19:44:01 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 59760 00:05:33.352 19:44:01 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 59760 00:05:33.611 00:05:33.611 real 0m1.776s 00:05:33.611 user 0m3.261s 00:05:33.611 sys 0m0.444s 00:05:33.611 19:44:02 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.611 19:44:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:33.611 ************************************ 00:05:33.611 END TEST spdkcli_tcp 00:05:33.611 ************************************ 00:05:33.611 19:44:02 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:33.611 19:44:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.611 19:44:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.611 19:44:02 -- common/autotest_common.sh@10 -- # set +x 00:05:33.611 ************************************ 00:05:33.611 START TEST dpdk_mem_utility 00:05:33.611 ************************************ 00:05:33.611 19:44:02 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:33.611 * Looking for test storage... 
00:05:33.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:33.611 19:44:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:33.611 19:44:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59845 00:05:33.611 19:44:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:33.611 19:44:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59845 00:05:33.611 19:44:02 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 59845 ']' 00:05:33.611 19:44:02 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.611 19:44:02 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:33.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.611 19:44:02 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.611 19:44:02 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:33.611 19:44:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:33.870 [2024-07-24 19:44:02.321664] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:05:33.870 [2024-07-24 19:44:02.321780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59845 ] 00:05:33.870 [2024-07-24 19:44:02.457768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.129 [2024-07-24 19:44:02.586708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.129 [2024-07-24 19:44:02.644539] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:34.696 19:44:03 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.696 19:44:03 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:34.696 19:44:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:34.696 19:44:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:34.696 19:44:03 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.696 19:44:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:34.696 { 00:05:34.697 "filename": "/tmp/spdk_mem_dump.txt" 00:05:34.697 } 00:05:34.697 19:44:03 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.697 19:44:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:34.697 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:34.697 1 heaps totaling size 814.000000 MiB 00:05:34.697 size: 814.000000 MiB heap id: 0 00:05:34.697 end heaps---------- 00:05:34.697 8 mempools totaling size 598.116089 MiB 00:05:34.697 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:34.697 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:34.697 size: 84.521057 MiB name: bdev_io_59845 00:05:34.697 size: 51.011292 MiB name: evtpool_59845 00:05:34.697 size: 50.003479 
MiB name: msgpool_59845 00:05:34.697 size: 21.763794 MiB name: PDU_Pool 00:05:34.697 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:34.697 size: 0.026123 MiB name: Session_Pool 00:05:34.697 end mempools------- 00:05:34.697 6 memzones totaling size 4.142822 MiB 00:05:34.697 size: 1.000366 MiB name: RG_ring_0_59845 00:05:34.697 size: 1.000366 MiB name: RG_ring_1_59845 00:05:34.697 size: 1.000366 MiB name: RG_ring_4_59845 00:05:34.697 size: 1.000366 MiB name: RG_ring_5_59845 00:05:34.697 size: 0.125366 MiB name: RG_ring_2_59845 00:05:34.697 size: 0.015991 MiB name: RG_ring_3_59845 00:05:34.697 end memzones------- 00:05:34.697 19:44:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:34.956 heap id: 0 total size: 814.000000 MiB number of busy elements: 297 number of free elements: 15 00:05:34.956 list of free elements. size: 12.472473 MiB 00:05:34.956 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:34.956 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:34.956 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:34.956 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:34.956 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:34.956 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:34.956 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:34.956 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:34.956 element at address: 0x200000200000 with size: 0.833191 MiB 00:05:34.956 element at address: 0x20001aa00000 with size: 0.569702 MiB 00:05:34.956 element at address: 0x20000b200000 with size: 0.488892 MiB 00:05:34.956 element at address: 0x200000800000 with size: 0.486145 MiB 00:05:34.956 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:34.957 element at address: 0x200027e00000 with size: 0.395935 MiB 00:05:34.957 element at address: 0x200003a00000 with size: 0.347839 MiB 00:05:34.957 list of standard malloc elements. 
size: 199.264954 MiB 00:05:34.957 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:34.957 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:34.957 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:34.957 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:34.957 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:34.957 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:34.957 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:34.957 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:34.957 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:34.957 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:05:34.957 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20000087c740 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20000087c800 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20000087c980 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a59180 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a59240 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a59300 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a59480 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a59540 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a59600 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a59780 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a59840 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a59900 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:34.957 element at 
address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:34.957 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:34.957 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:34.957 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa92200 
with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa946c0 with size: 0.000183 MiB 
00:05:34.958 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:34.958 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e65680 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6c280 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:34.958 element at 
address: 0x200027e6d980 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6fe40 
with size: 0.000183 MiB 00:05:34.958 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:34.959 list of memzone associated elements. size: 602.262573 MiB 00:05:34.959 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:34.959 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:34.959 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:34.959 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:34.959 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:34.959 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59845_0 00:05:34.959 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:34.959 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59845_0 00:05:34.959 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:34.959 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59845_0 00:05:34.959 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:34.959 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:34.959 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:34.959 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:34.959 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:34.959 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59845 00:05:34.959 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:34.959 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59845 00:05:34.959 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:34.959 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59845 00:05:34.959 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:34.959 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:34.959 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:34.959 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:34.959 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:34.959 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:34.959 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:34.959 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:34.959 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:34.959 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59845 00:05:34.959 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:34.959 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59845 00:05:34.959 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:34.959 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59845 00:05:34.959 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:34.959 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59845 00:05:34.959 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:34.959 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59845 00:05:34.959 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:34.959 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:34.959 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:34.959 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:34.959 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:34.959 associated memzone info: size: 0.250366 MiB name: 
RG_MP_PDU_immediate_data_Pool 00:05:34.959 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:34.959 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59845 00:05:34.959 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:34.959 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:34.959 element at address: 0x200027e65740 with size: 0.023743 MiB 00:05:34.959 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:34.959 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:34.959 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59845 00:05:34.959 element at address: 0x200027e6b880 with size: 0.002441 MiB 00:05:34.959 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:34.959 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:34.959 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59845 00:05:34.959 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:34.959 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59845 00:05:34.959 element at address: 0x200027e6c340 with size: 0.000305 MiB 00:05:34.959 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:34.959 19:44:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:34.959 19:44:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59845 00:05:34.959 19:44:03 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 59845 ']' 00:05:34.959 19:44:03 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 59845 00:05:34.959 19:44:03 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:34.959 19:44:03 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:34.959 19:44:03 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59845 00:05:34.959 19:44:03 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:34.959 19:44:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:34.959 killing process with pid 59845 00:05:34.959 19:44:03 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59845' 00:05:34.959 19:44:03 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 59845 00:05:34.959 19:44:03 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 59845 00:05:35.217 00:05:35.217 real 0m1.629s 00:05:35.217 user 0m1.742s 00:05:35.217 sys 0m0.415s 00:05:35.217 19:44:03 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.217 19:44:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:35.217 ************************************ 00:05:35.217 END TEST dpdk_mem_utility 00:05:35.217 ************************************ 00:05:35.217 19:44:03 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:35.217 19:44:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.217 19:44:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.217 19:44:03 -- common/autotest_common.sh@10 -- # set +x 00:05:35.217 ************************************ 00:05:35.217 START TEST event 00:05:35.217 ************************************ 00:05:35.217 19:44:03 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:35.476 * Looking for test storage... 
00:05:35.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:35.476 19:44:03 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:35.476 19:44:03 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:35.476 19:44:03 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:35.476 19:44:03 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:35.476 19:44:03 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.476 19:44:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.476 ************************************ 00:05:35.476 START TEST event_perf 00:05:35.476 ************************************ 00:05:35.476 19:44:03 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:35.476 Running I/O for 1 seconds...[2024-07-24 19:44:03.958431] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:05:35.476 [2024-07-24 19:44:03.958526] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59923 ] 00:05:35.476 [2024-07-24 19:44:04.100106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:35.735 [2024-07-24 19:44:04.229323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.735 [2024-07-24 19:44:04.229425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.735 [2024-07-24 19:44:04.229516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:35.735 [2024-07-24 19:44:04.229522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.774 Running I/O for 1 seconds... 00:05:36.774 lcore 0: 205727 00:05:36.774 lcore 1: 205725 00:05:36.774 lcore 2: 205727 00:05:36.774 lcore 3: 205728 00:05:36.774 done. 00:05:36.774 00:05:36.774 real 0m1.380s 00:05:36.774 user 0m4.182s 00:05:36.774 sys 0m0.068s 00:05:36.774 19:44:05 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.774 19:44:05 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:36.774 ************************************ 00:05:36.774 END TEST event_perf 00:05:36.774 ************************************ 00:05:36.774 19:44:05 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:36.774 19:44:05 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:36.774 19:44:05 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.774 19:44:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.774 ************************************ 00:05:36.774 START TEST event_reactor 00:05:36.774 ************************************ 00:05:36.774 19:44:05 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:36.774 [2024-07-24 19:44:05.385219] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:05:36.774 [2024-07-24 19:44:05.385304] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59957 ] 00:05:37.032 [2024-07-24 19:44:05.517811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.032 [2024-07-24 19:44:05.635586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.409 test_start 00:05:38.409 oneshot 00:05:38.409 tick 100 00:05:38.409 tick 100 00:05:38.409 tick 250 00:05:38.409 tick 100 00:05:38.409 tick 100 00:05:38.409 tick 250 00:05:38.409 tick 500 00:05:38.409 tick 100 00:05:38.409 tick 100 00:05:38.409 tick 100 00:05:38.409 tick 250 00:05:38.409 tick 100 00:05:38.409 tick 100 00:05:38.409 test_end 00:05:38.409 00:05:38.409 real 0m1.355s 00:05:38.409 user 0m1.195s 00:05:38.409 sys 0m0.053s 00:05:38.409 19:44:06 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.409 19:44:06 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:38.409 ************************************ 00:05:38.409 END TEST event_reactor 00:05:38.409 ************************************ 00:05:38.409 19:44:06 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:38.409 19:44:06 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:38.409 19:44:06 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.409 19:44:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.409 ************************************ 00:05:38.409 START TEST event_reactor_perf 00:05:38.409 ************************************ 00:05:38.409 19:44:06 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:38.409 [2024-07-24 19:44:06.782688] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:05:38.409 [2024-07-24 19:44:06.782845] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59993 ] 00:05:38.409 [2024-07-24 19:44:06.912360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.409 [2024-07-24 19:44:07.029760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.779 test_start 00:05:39.779 test_end 00:05:39.779 Performance: 372458 events per second 00:05:39.779 00:05:39.779 real 0m1.351s 00:05:39.779 user 0m1.187s 00:05:39.780 sys 0m0.054s 00:05:39.780 ************************************ 00:05:39.780 END TEST event_reactor_perf 00:05:39.780 ************************************ 00:05:39.780 19:44:08 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.780 19:44:08 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:39.780 19:44:08 event -- event/event.sh@49 -- # uname -s 00:05:39.780 19:44:08 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:39.780 19:44:08 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:39.780 19:44:08 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.780 19:44:08 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.780 19:44:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.780 ************************************ 00:05:39.780 START TEST event_scheduler 00:05:39.780 ************************************ 00:05:39.780 19:44:08 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:39.780 * Looking for test storage... 00:05:39.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:39.780 19:44:08 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:39.780 19:44:08 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60054 00:05:39.780 19:44:08 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:39.780 19:44:08 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.780 19:44:08 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60054 00:05:39.780 19:44:08 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 60054 ']' 00:05:39.780 19:44:08 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.780 19:44:08 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.780 19:44:08 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.780 19:44:08 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.780 19:44:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.780 [2024-07-24 19:44:08.292793] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:05:39.780 [2024-07-24 19:44:08.293244] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60054 ] 00:05:39.780 [2024-07-24 19:44:08.438261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:40.051 [2024-07-24 19:44:08.559495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.051 [2024-07-24 19:44:08.559577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.051 [2024-07-24 19:44:08.559675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:40.051 [2024-07-24 19:44:08.559682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.984 19:44:09 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.984 19:44:09 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:40.984 19:44:09 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:40.984 19:44:09 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.984 19:44:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:40.984 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:40.984 POWER: Cannot set governor of lcore 0 to userspace 00:05:40.984 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:40.984 POWER: Cannot set governor of lcore 0 to performance 00:05:40.984 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:40.984 POWER: Cannot set governor of lcore 0 to userspace 00:05:40.984 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:40.984 POWER: Cannot set governor of lcore 0 to userspace 00:05:40.984 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:40.984 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:40.984 POWER: Unable to set Power Management Environment for lcore 0 00:05:40.984 [2024-07-24 19:44:09.401994] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:40.984 [2024-07-24 19:44:09.402010] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:40.984 [2024-07-24 19:44:09.402019] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:40.984 [2024-07-24 19:44:09.402031] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:40.984 [2024-07-24 19:44:09.402039] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:40.984 [2024-07-24 19:44:09.402046] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:40.984 19:44:09 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.984 19:44:09 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:40.984 19:44:09 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.984 19:44:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:40.984 [2024-07-24 19:44:09.462204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:40.984 [2024-07-24 19:44:09.497032] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:40.985 19:44:09 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.985 19:44:09 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:40.985 19:44:09 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.985 19:44:09 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.985 19:44:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:40.985 ************************************ 00:05:40.985 START TEST scheduler_create_thread 00:05:40.985 ************************************ 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.985 2 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.985 3 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.985 4 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.985 5 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.985 6 00:05:40.985 
19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.985 7 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.985 8 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.985 9 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.985 10 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.985 19:44:09 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.985 19:44:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.881 19:44:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:42.881 19:44:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:42.881 19:44:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:42.881 19:44:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:42.881 19:44:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.814 ************************************ 00:05:43.814 END TEST scheduler_create_thread 00:05:43.814 ************************************ 00:05:43.814 19:44:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.814 00:05:43.814 real 0m2.611s 00:05:43.814 user 0m0.016s 00:05:43.814 sys 0m0.006s 00:05:43.814 19:44:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.814 19:44:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.814 19:44:12 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:43.814 19:44:12 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60054 00:05:43.814 19:44:12 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 60054 ']' 00:05:43.814 19:44:12 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 60054 00:05:43.814 19:44:12 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:43.814 19:44:12 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.814 19:44:12 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60054 00:05:43.814 killing process with pid 60054 00:05:43.814 19:44:12 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:43.814 19:44:12 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:43.814 19:44:12 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60054' 00:05:43.814 19:44:12 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 60054 00:05:43.814 19:44:12 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 60054 00:05:44.072 [2024-07-24 19:44:12.597529] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:44.330 00:05:44.330 real 0m4.670s 00:05:44.330 user 0m9.129s 00:05:44.330 sys 0m0.362s 00:05:44.330 19:44:12 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.330 19:44:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:44.330 ************************************ 00:05:44.330 END TEST event_scheduler 00:05:44.330 ************************************ 00:05:44.330 19:44:12 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:44.330 19:44:12 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:44.330 19:44:12 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.330 19:44:12 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.330 19:44:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.330 ************************************ 00:05:44.330 START TEST app_repeat 00:05:44.330 ************************************ 00:05:44.330 19:44:12 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:44.330 19:44:12 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.330 19:44:12 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.330 19:44:12 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:44.330 19:44:12 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.330 19:44:12 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:44.330 19:44:12 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:44.330 19:44:12 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:44.330 Process app_repeat pid: 60154 00:05:44.330 spdk_app_start Round 0 00:05:44.330 19:44:12 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60154 00:05:44.330 19:44:12 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.330 19:44:12 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60154' 00:05:44.330 19:44:12 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:44.330 19:44:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:44.330 19:44:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:44.330 19:44:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60154 /var/tmp/spdk-nbd.sock 00:05:44.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:44.330 19:44:12 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60154 ']' 00:05:44.330 19:44:12 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.330 19:44:12 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.330 19:44:12 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:44.330 19:44:12 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.330 19:44:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.330 [2024-07-24 19:44:12.912752] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:05:44.330 [2024-07-24 19:44:12.913065] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60154 ] 00:05:44.588 [2024-07-24 19:44:13.044044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.588 [2024-07-24 19:44:13.160886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.588 [2024-07-24 19:44:13.160896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.588 [2024-07-24 19:44:13.213774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:45.522 19:44:14 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.522 19:44:14 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:45.522 19:44:14 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.089 Malloc0 00:05:46.089 19:44:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.089 Malloc1 00:05:46.349 19:44:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.349 19:44:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.349 19:44:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.349 19:44:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:46.349 19:44:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.349 19:44:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:46.349 19:44:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.349 19:44:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.349 19:44:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.349 19:44:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:46.349 19:44:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.349 19:44:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:46.349 19:44:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:46.349 19:44:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:46.349 19:44:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.349 19:44:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:46.349 /dev/nbd0 00:05:46.608 19:44:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.608 19:44:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:46.608 19:44:15 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:46.608 19:44:15 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:46.608 19:44:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:46.608 19:44:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:46.608 19:44:15 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:46.608 19:44:15 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:46.608 19:44:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:46.608 19:44:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:46.608 19:44:15 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.608 1+0 records in 00:05:46.608 1+0 records out 00:05:46.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047421 s, 8.6 MB/s 00:05:46.608 19:44:15 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.608 19:44:15 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:46.608 19:44:15 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.608 19:44:15 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:46.608 19:44:15 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:46.608 19:44:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.608 19:44:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.608 19:44:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.608 /dev/nbd1 00:05:46.867 19:44:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.867 19:44:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.867 19:44:15 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:46.867 19:44:15 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:46.867 19:44:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:46.867 19:44:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:46.867 19:44:15 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:46.867 19:44:15 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:46.867 19:44:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:46.867 19:44:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:46.867 19:44:15 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.867 1+0 records in 00:05:46.867 1+0 records out 00:05:46.867 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000808258 s, 5.1 MB/s 00:05:46.867 19:44:15 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.867 19:44:15 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:46.867 19:44:15 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.867 19:44:15 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:46.867 19:44:15 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:46.867 19:44:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.867 19:44:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.867 19:44:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:05:46.867 19:44:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.867 19:44:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:47.126 { 00:05:47.126 "nbd_device": "/dev/nbd0", 00:05:47.126 "bdev_name": "Malloc0" 00:05:47.126 }, 00:05:47.126 { 00:05:47.126 "nbd_device": "/dev/nbd1", 00:05:47.126 "bdev_name": "Malloc1" 00:05:47.126 } 00:05:47.126 ]' 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:47.126 { 00:05:47.126 "nbd_device": "/dev/nbd0", 00:05:47.126 "bdev_name": "Malloc0" 00:05:47.126 }, 00:05:47.126 { 00:05:47.126 "nbd_device": "/dev/nbd1", 00:05:47.126 "bdev_name": "Malloc1" 00:05:47.126 } 00:05:47.126 ]' 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:47.126 /dev/nbd1' 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:47.126 /dev/nbd1' 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:47.126 256+0 records in 00:05:47.126 256+0 records out 00:05:47.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00687696 s, 152 MB/s 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:47.126 256+0 records in 00:05:47.126 256+0 records out 00:05:47.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250323 s, 41.9 MB/s 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:47.126 256+0 records in 00:05:47.126 256+0 records out 00:05:47.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298791 s, 35.1 MB/s 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.126 19:44:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:47.694 19:44:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.694 19:44:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.694 19:44:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.694 19:44:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.694 19:44:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.694 19:44:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.694 19:44:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.694 19:44:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.694 19:44:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.694 19:44:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.953 19:44:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.953 19:44:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:47.953 19:44:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:47.953 19:44:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.953 19:44:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.953 19:44:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.953 19:44:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.953 19:44:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.953 19:44:16 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.953 19:44:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.953 19:44:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.211 19:44:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:48.211 19:44:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:48.211 19:44:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.211 19:44:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:48.211 19:44:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.211 19:44:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:48.211 19:44:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:48.211 19:44:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:48.211 19:44:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:48.211 19:44:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:48.211 19:44:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:48.211 19:44:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:48.211 19:44:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:48.468 19:44:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:48.727 [2024-07-24 19:44:17.207715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.727 [2024-07-24 19:44:17.320820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.727 [2024-07-24 19:44:17.320831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.727 [2024-07-24 19:44:17.372908] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:48.727 [2024-07-24 19:44:17.373010] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:48.727 [2024-07-24 19:44:17.373025] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:52.011 spdk_app_start Round 1 00:05:52.011 19:44:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:52.011 19:44:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:52.011 19:44:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60154 /var/tmp/spdk-nbd.sock 00:05:52.011 19:44:20 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60154 ']' 00:05:52.011 19:44:20 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.011 19:44:20 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.011 19:44:20 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:52.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:52.011 19:44:20 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.011 19:44:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:52.011 19:44:20 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.011 19:44:20 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:52.011 19:44:20 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.011 Malloc0 00:05:52.011 19:44:20 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.270 Malloc1 00:05:52.270 19:44:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.270 19:44:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.270 19:44:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.270 19:44:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:52.270 19:44:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.270 19:44:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:52.270 19:44:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.270 19:44:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.270 19:44:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.270 19:44:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:52.270 19:44:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.270 19:44:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:52.270 19:44:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:52.270 19:44:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:52.270 19:44:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.270 19:44:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:52.528 /dev/nbd0 00:05:52.528 19:44:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:52.528 19:44:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:52.528 19:44:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:52.528 19:44:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:52.528 19:44:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:52.529 19:44:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:52.529 19:44:21 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:52.529 19:44:21 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:52.529 19:44:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:52.529 19:44:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:52.529 19:44:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.529 1+0 records in 00:05:52.529 1+0 records out 
00:05:52.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000889651 s, 4.6 MB/s 00:05:52.529 19:44:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.529 19:44:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:52.529 19:44:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.529 19:44:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:52.529 19:44:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:52.529 19:44:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.529 19:44:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.529 19:44:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:52.787 /dev/nbd1 00:05:52.787 19:44:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:52.787 19:44:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:52.787 19:44:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:52.787 19:44:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:52.787 19:44:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:52.787 19:44:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:52.787 19:44:21 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:52.787 19:44:21 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:52.787 19:44:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:52.787 19:44:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:52.787 19:44:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.787 1+0 records in 00:05:52.787 1+0 records out 00:05:52.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000936794 s, 4.4 MB/s 00:05:52.787 19:44:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.787 19:44:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:52.787 19:44:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.787 19:44:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:52.787 19:44:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:52.787 19:44:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.787 19:44:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.787 19:44:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.787 19:44:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.787 19:44:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:53.354 { 00:05:53.354 "nbd_device": "/dev/nbd0", 00:05:53.354 "bdev_name": "Malloc0" 00:05:53.354 }, 00:05:53.354 { 00:05:53.354 "nbd_device": "/dev/nbd1", 00:05:53.354 "bdev_name": "Malloc1" 00:05:53.354 } 
00:05:53.354 ]' 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:53.354 { 00:05:53.354 "nbd_device": "/dev/nbd0", 00:05:53.354 "bdev_name": "Malloc0" 00:05:53.354 }, 00:05:53.354 { 00:05:53.354 "nbd_device": "/dev/nbd1", 00:05:53.354 "bdev_name": "Malloc1" 00:05:53.354 } 00:05:53.354 ]' 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:53.354 /dev/nbd1' 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:53.354 /dev/nbd1' 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:53.354 256+0 records in 00:05:53.354 256+0 records out 00:05:53.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00828917 s, 126 MB/s 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:53.354 256+0 records in 00:05:53.354 256+0 records out 00:05:53.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231238 s, 45.3 MB/s 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:53.354 256+0 records in 00:05:53.354 256+0 records out 00:05:53.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245219 s, 42.8 MB/s 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:53.354 19:44:21 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.354 19:44:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:53.612 19:44:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:53.612 19:44:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:53.612 19:44:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:53.612 19:44:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.612 19:44:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.613 19:44:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:53.613 19:44:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.613 19:44:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.613 19:44:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.613 19:44:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:53.872 19:44:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:53.872 19:44:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:53.872 19:44:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:53.872 19:44:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.872 19:44:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.872 19:44:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:53.872 19:44:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.872 19:44:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.872 19:44:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.872 19:44:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.872 19:44:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.130 19:44:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:54.130 19:44:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:54.130 19:44:22 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:54.130 19:44:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:54.130 19:44:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:54.130 19:44:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.130 19:44:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:54.130 19:44:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:54.130 19:44:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:54.130 19:44:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:54.130 19:44:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:54.130 19:44:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:54.130 19:44:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:54.389 19:44:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:54.648 [2024-07-24 19:44:23.247756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.910 [2024-07-24 19:44:23.363454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.910 [2024-07-24 19:44:23.363466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.910 [2024-07-24 19:44:23.417537] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:54.910 [2024-07-24 19:44:23.417628] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:54.910 [2024-07-24 19:44:23.417642] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:57.476 spdk_app_start Round 2 00:05:57.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:57.476 19:44:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:57.476 19:44:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:57.476 19:44:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60154 /var/tmp/spdk-nbd.sock 00:05:57.476 19:44:26 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60154 ']' 00:05:57.476 19:44:26 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.476 19:44:26 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.476 19:44:26 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
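The trace just above marks the boundary between two app_repeat rounds: the harness sends spdk_kill_instance SIGTERM over the NBD RPC socket, sleeps three seconds, and then waits for the restarted app to listen again before repeating the whole bdev/NBD exercise. A condensed sketch of what one round drives over that socket, using only RPC calls that appear in this log (the rpc shorthand and the flattened ordering are illustrative, not the actual test/event/event.sh code):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create 64 4096          # creates Malloc0
  $rpc bdev_malloc_create 64 4096          # creates Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0    # export each bdev as an NBD block device
  $rpc nbd_start_disk Malloc1 /dev/nbd1
  $rpc nbd_get_disks                       # returns the two-entry JSON list parsed with jq above
  # ... dd/cmp data verification against /dev/nbd0 and /dev/nbd1 ...
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1
  $rpc spdk_kill_instance SIGTERM          # ends the round; the app restarts and the next round begins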
00:05:57.476 19:44:26 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.476 19:44:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.733 19:44:26 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.733 19:44:26 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:57.733 19:44:26 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.991 Malloc0 00:05:57.991 19:44:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.250 Malloc1 00:05:58.508 19:44:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.508 19:44:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.508 19:44:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.508 19:44:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.508 19:44:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.508 19:44:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.508 19:44:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.508 19:44:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.508 19:44:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.508 19:44:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.508 19:44:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.508 19:44:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:58.508 19:44:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:58.509 19:44:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.509 19:44:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.509 19:44:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:58.509 /dev/nbd0 00:05:58.767 19:44:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:58.767 19:44:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:58.767 19:44:27 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:58.767 19:44:27 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:58.767 19:44:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:58.767 19:44:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:58.767 19:44:27 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:58.767 19:44:27 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:58.767 19:44:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:58.767 19:44:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:58.767 19:44:27 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.767 1+0 records in 00:05:58.767 1+0 records out 
00:05:58.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000760778 s, 5.4 MB/s 00:05:58.767 19:44:27 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.767 19:44:27 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:58.767 19:44:27 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:58.767 19:44:27 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:58.767 19:44:27 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:58.767 19:44:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.767 19:44:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.767 19:44:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:59.026 /dev/nbd1 00:05:59.026 19:44:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:59.026 19:44:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:59.026 19:44:27 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:59.026 19:44:27 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:59.026 19:44:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:59.026 19:44:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:59.026 19:44:27 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:59.026 19:44:27 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:59.026 19:44:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:59.026 19:44:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:59.026 19:44:27 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.026 1+0 records in 00:05:59.026 1+0 records out 00:05:59.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419875 s, 9.8 MB/s 00:05:59.026 19:44:27 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:59.026 19:44:27 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:59.026 19:44:27 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:59.026 19:44:27 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:59.026 19:44:27 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:59.026 19:44:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.026 19:44:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.026 19:44:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.026 19:44:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.026 19:44:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:59.285 { 00:05:59.285 "nbd_device": "/dev/nbd0", 00:05:59.285 "bdev_name": "Malloc0" 00:05:59.285 }, 00:05:59.285 { 00:05:59.285 "nbd_device": "/dev/nbd1", 00:05:59.285 "bdev_name": "Malloc1" 00:05:59.285 } 
00:05:59.285 ]' 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:59.285 { 00:05:59.285 "nbd_device": "/dev/nbd0", 00:05:59.285 "bdev_name": "Malloc0" 00:05:59.285 }, 00:05:59.285 { 00:05:59.285 "nbd_device": "/dev/nbd1", 00:05:59.285 "bdev_name": "Malloc1" 00:05:59.285 } 00:05:59.285 ]' 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:59.285 /dev/nbd1' 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:59.285 /dev/nbd1' 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:59.285 256+0 records in 00:05:59.285 256+0 records out 00:05:59.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00635877 s, 165 MB/s 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:59.285 256+0 records in 00:05:59.285 256+0 records out 00:05:59.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269461 s, 38.9 MB/s 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:59.285 256+0 records in 00:05:59.285 256+0 records out 00:05:59.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0323655 s, 32.4 MB/s 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:59.285 19:44:27 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.285 19:44:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:59.544 19:44:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:59.544 19:44:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:59.544 19:44:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:59.544 19:44:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.544 19:44:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.544 19:44:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:59.544 19:44:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.544 19:44:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.544 19:44:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.544 19:44:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:59.802 19:44:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:59.802 19:44:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:59.802 19:44:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:59.802 19:44:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.802 19:44:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.803 19:44:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:59.803 19:44:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.803 19:44:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.803 19:44:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.803 19:44:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.803 19:44:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.061 19:44:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.061 19:44:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.061 19:44:28 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:00.319 19:44:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.319 19:44:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.319 19:44:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.319 19:44:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:00.319 19:44:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.319 19:44:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.319 19:44:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:00.319 19:44:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:00.319 19:44:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:00.319 19:44:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:00.577 19:44:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:00.861 [2024-07-24 19:44:29.257956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.861 [2024-07-24 19:44:29.376301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.861 [2024-07-24 19:44:29.376310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.861 [2024-07-24 19:44:29.429804] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:00.861 [2024-07-24 19:44:29.429890] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:00.861 [2024-07-24 19:44:29.429905] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.147 19:44:32 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60154 /var/tmp/spdk-nbd.sock 00:06:04.147 19:44:32 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 60154 ']' 00:06:04.147 19:44:32 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.147 19:44:32 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.147 19:44:32 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
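Both of the rounds shown above finish with the same data-integrity pass before the NBD devices are torn down: 1 MiB of random data is written through each device and then compared back byte-for-byte. Reduced to a stand-alone sketch (the temporary file path is shortened here; the real one is spdk/test/event/nbdrandtest as the trace shows):

  tmp=/tmp/nbdrandtest                                  # stand-in for the path in the trace
  dd if=/dev/urandom of="$tmp" bs=4096 count=256        # 256 x 4 KiB = 1 MiB of random data
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write it through each NBD device
  done
  for dev in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M "$tmp" "$dev"                        # read back and compare; any mismatch fails the test
  done
  rm "$tmp"

After the comparison the devices are stopped with nbd_stop_disk and the helper polls /proc/partitions until the nbd entries disappear, which is the grep -q -w nbdX loop visible in the trace.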
00:06:04.147 19:44:32 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.147 19:44:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.147 19:44:32 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.147 19:44:32 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:04.147 19:44:32 event.app_repeat -- event/event.sh@39 -- # killprocess 60154 00:06:04.147 19:44:32 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 60154 ']' 00:06:04.147 19:44:32 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 60154 00:06:04.147 19:44:32 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:04.147 19:44:32 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:04.147 19:44:32 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60154 00:06:04.147 killing process with pid 60154 00:06:04.147 19:44:32 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.147 19:44:32 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.147 19:44:32 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60154' 00:06:04.147 19:44:32 event.app_repeat -- common/autotest_common.sh@969 -- # kill 60154 00:06:04.147 19:44:32 event.app_repeat -- common/autotest_common.sh@974 -- # wait 60154 00:06:04.147 spdk_app_start is called in Round 0. 00:06:04.147 Shutdown signal received, stop current app iteration 00:06:04.147 Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 reinitialization... 00:06:04.147 spdk_app_start is called in Round 1. 00:06:04.147 Shutdown signal received, stop current app iteration 00:06:04.147 Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 reinitialization... 00:06:04.147 spdk_app_start is called in Round 2. 00:06:04.147 Shutdown signal received, stop current app iteration 00:06:04.147 Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 reinitialization... 00:06:04.147 spdk_app_start is called in Round 3. 00:06:04.147 Shutdown signal received, stop current app iteration 00:06:04.147 ************************************ 00:06:04.147 END TEST app_repeat 00:06:04.147 ************************************ 00:06:04.147 19:44:32 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:04.147 19:44:32 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:04.147 00:06:04.147 real 0m19.729s 00:06:04.147 user 0m44.430s 00:06:04.147 sys 0m3.044s 00:06:04.147 19:44:32 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.147 19:44:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.147 19:44:32 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:04.147 19:44:32 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:04.147 19:44:32 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.148 19:44:32 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.148 19:44:32 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.148 ************************************ 00:06:04.148 START TEST cpu_locks 00:06:04.148 ************************************ 00:06:04.148 19:44:32 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:04.148 * Looking for test storage... 
00:06:04.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:04.148 19:44:32 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:04.148 19:44:32 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:04.148 19:44:32 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:04.148 19:44:32 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:04.148 19:44:32 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.148 19:44:32 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.148 19:44:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.148 ************************************ 00:06:04.148 START TEST default_locks 00:06:04.148 ************************************ 00:06:04.148 19:44:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:04.148 19:44:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60593 00:06:04.148 19:44:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60593 00:06:04.148 19:44:32 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60593 ']' 00:06:04.148 19:44:32 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.148 19:44:32 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.148 19:44:32 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.148 19:44:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.148 19:44:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.148 19:44:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.406 [2024-07-24 19:44:32.836698] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:04.406 [2024-07-24 19:44:32.836848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60593 ] 00:06:04.406 [2024-07-24 19:44:32.975256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.665 [2024-07-24 19:44:33.094944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.665 [2024-07-24 19:44:33.152668] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:05.230 19:44:33 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.230 19:44:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:05.230 19:44:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60593 00:06:05.230 19:44:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60593 00:06:05.230 19:44:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.794 19:44:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60593 00:06:05.794 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 60593 ']' 00:06:05.794 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 60593 00:06:05.794 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:05.794 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.794 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60593 00:06:05.794 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:05.794 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:05.794 killing process with pid 60593 00:06:05.794 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60593' 00:06:05.794 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 60593 00:06:05.794 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 60593 00:06:06.052 19:44:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60593 00:06:06.052 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:06.052 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60593 00:06:06.052 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:06.052 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.052 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:06.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:06.052 ERROR: process (pid: 60593) is no longer running 00:06:06.053 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.053 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 60593 00:06:06.053 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 60593 ']' 00:06:06.053 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.053 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.053 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.053 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.053 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.053 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60593) - No such process 00:06:06.053 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.053 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:06.053 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:06.053 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:06.053 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:06.053 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:06.053 19:44:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:06.053 19:44:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:06.053 19:44:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:06.053 19:44:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:06.053 00:06:06.053 real 0m1.884s 00:06:06.053 user 0m2.045s 00:06:06.053 sys 0m0.569s 00:06:06.053 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.053 19:44:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.053 ************************************ 00:06:06.053 END TEST default_locks 00:06:06.053 ************************************ 00:06:06.053 19:44:34 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:06.053 19:44:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.053 19:44:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.053 19:44:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.053 ************************************ 00:06:06.053 START TEST default_locks_via_rpc 00:06:06.053 ************************************ 00:06:06.053 19:44:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:06.053 19:44:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60645 00:06:06.053 19:44:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.053 19:44:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # 
waitforlisten 60645 00:06:06.053 19:44:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60645 ']' 00:06:06.053 19:44:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.053 19:44:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.053 19:44:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.053 19:44:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.053 19:44:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.311 [2024-07-24 19:44:34.773060] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:06.311 [2024-07-24 19:44:34.773179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60645 ] 00:06:06.311 [2024-07-24 19:44:34.911760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.571 [2024-07-24 19:44:35.039810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.571 [2024-07-24 19:44:35.095869] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:07.138 19:44:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.138 19:44:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:07.138 19:44:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:07.138 19:44:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.138 19:44:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.138 19:44:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.138 19:44:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:07.138 19:44:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:07.138 19:44:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:07.138 19:44:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:07.138 19:44:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:07.138 19:44:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.138 19:44:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.138 19:44:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.138 19:44:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60645 00:06:07.138 19:44:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60645 00:06:07.138 19:44:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.396 
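The default_locks_via_rpc sequence above toggles the CPU core locks at runtime rather than at startup: framework_disable_cpumask_locks releases them, the test confirms no spdk_cpu_lock entries remain, framework_enable_cpumask_locks re-acquires them, and locks_exist then checks the target's lock with lslocks. As a sketch against the pid from this run (rpc_cmd in the trace wraps scripts/rpc.py; the default /var/tmp/spdk.sock socket is an assumption, consistent with the target having been started without -r):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc framework_disable_cpumask_locks          # drop the per-core lock file(s)
  $rpc framework_enable_cpumask_locks           # take them again
  lslocks -p 60645 | grep -q spdk_cpu_lock      # locks_exist: pid 60645 holds a core lock once more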
19:44:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60645 00:06:07.396 19:44:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 60645 ']' 00:06:07.396 19:44:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 60645 00:06:07.659 19:44:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:07.659 19:44:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.659 19:44:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60645 00:06:07.659 19:44:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:07.659 19:44:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:07.659 killing process with pid 60645 00:06:07.659 19:44:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60645' 00:06:07.659 19:44:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 60645 00:06:07.659 19:44:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 60645 00:06:07.952 00:06:07.952 real 0m1.759s 00:06:07.952 user 0m1.837s 00:06:07.952 sys 0m0.525s 00:06:07.952 19:44:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.952 19:44:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.952 ************************************ 00:06:07.952 END TEST default_locks_via_rpc 00:06:07.952 ************************************ 00:06:07.952 19:44:36 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:07.952 19:44:36 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.952 19:44:36 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.952 19:44:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.952 ************************************ 00:06:07.952 START TEST non_locking_app_on_locked_coremask 00:06:07.952 ************************************ 00:06:07.952 19:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:07.952 19:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60696 00:06:07.952 19:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.952 19:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60696 /var/tmp/spdk.sock 00:06:07.952 19:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60696 ']' 00:06:07.952 19:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.952 19:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:07.952 19:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.952 19:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.952 19:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.952 [2024-07-24 19:44:36.567243] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:07.952 [2024-07-24 19:44:36.567341] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60696 ] 00:06:08.212 [2024-07-24 19:44:36.699087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.212 [2024-07-24 19:44:36.815425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.212 [2024-07-24 19:44:36.867812] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:09.148 19:44:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.148 19:44:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:09.148 19:44:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60712 00:06:09.148 19:44:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:09.148 19:44:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60712 /var/tmp/spdk2.sock 00:06:09.148 19:44:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60712 ']' 00:06:09.148 19:44:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.148 19:44:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.148 19:44:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.148 19:44:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.148 19:44:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.148 [2024-07-24 19:44:37.637160] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:09.148 [2024-07-24 19:44:37.637248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60712 ] 00:06:09.148 [2024-07-24 19:44:37.780923] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
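The "CPU core locks deactivated" notice above is the key to non_locking_app_on_locked_coremask: the second target is allowed onto the same core mask as the first only because it was started with --disable-cpumask-locks, and it gets its own RPC socket via -r so the two instances can be driven independently. The two invocations from the trace, side by side (backgrounding shown for illustration):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &                   # pid 60696: claims the core 0 lock
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 \
      --disable-cpumask-locks -r /var/tmp/spdk2.sock &                       # pid 60712: skips the lock, separate RPC socket

Without --disable-cpumask-locks the second instance would hit the claim_cpu_cores error that the locking_app_on_locked_coremask test provokes on purpose further down.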
00:06:09.148 [2024-07-24 19:44:37.780988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.407 [2024-07-24 19:44:38.012854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.665 [2024-07-24 19:44:38.117040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:10.233 19:44:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.233 19:44:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:10.233 19:44:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60696 00:06:10.233 19:44:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60696 00:06:10.233 19:44:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.800 19:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60696 00:06:10.800 19:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60696 ']' 00:06:10.800 19:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60696 00:06:10.800 19:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:10.800 19:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.800 19:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60696 00:06:10.800 19:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:10.800 killing process with pid 60696 00:06:10.800 19:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:10.800 19:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60696' 00:06:10.800 19:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60696 00:06:10.800 19:44:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60696 00:06:11.745 19:44:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60712 00:06:11.745 19:44:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60712 ']' 00:06:11.745 19:44:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60712 00:06:11.745 19:44:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:11.745 19:44:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.745 19:44:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60712 00:06:11.745 19:44:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:11.745 19:44:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:11.745 killing process with pid 60712 00:06:11.745 19:44:40 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60712' 00:06:11.745 19:44:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60712 00:06:11.745 19:44:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60712 00:06:12.003 00:06:12.003 real 0m4.063s 00:06:12.003 user 0m4.601s 00:06:12.003 sys 0m1.000s 00:06:12.003 19:44:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.003 19:44:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.003 ************************************ 00:06:12.003 END TEST non_locking_app_on_locked_coremask 00:06:12.003 ************************************ 00:06:12.003 19:44:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:12.003 19:44:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.003 19:44:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.003 19:44:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.003 ************************************ 00:06:12.003 START TEST locking_app_on_unlocked_coremask 00:06:12.003 ************************************ 00:06:12.003 19:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:12.003 19:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60779 00:06:12.003 19:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60779 /var/tmp/spdk.sock 00:06:12.003 19:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:12.003 19:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60779 ']' 00:06:12.003 19:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.003 19:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.003 19:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.003 19:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.003 19:44:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.262 [2024-07-24 19:44:40.692187] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:12.262 [2024-07-24 19:44:40.692292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60779 ] 00:06:12.262 [2024-07-24 19:44:40.830822] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
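The shutdowns of pids 60696 and 60712 above, like every test in this file, go through the killprocess helper whose xtrace keeps repeating in this log. A rough reconstruction from those trace lines, with the argument checks and the sudo guard simplified away (this is a sketch, not the exact autotest_common.sh code):

  killprocess() {
      local pid=$1
      kill -0 "$pid"                                # fail early if the process is already gone
      if [ "$(uname)" = Linux ]; then
          ps --no-headers -o comm= "$pid"           # reported as reactor_0 in the trace
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                   # reap the child so the next test starts clean
  }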
00:06:12.262 [2024-07-24 19:44:40.830884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.521 [2024-07-24 19:44:40.947477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.521 [2024-07-24 19:44:41.001522] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:13.088 19:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.088 19:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:13.088 19:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60795 00:06:13.088 19:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60795 /var/tmp/spdk2.sock 00:06:13.088 19:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60795 ']' 00:06:13.088 19:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.088 19:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.088 19:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:13.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.088 19:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.088 19:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.088 19:44:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.088 [2024-07-24 19:44:41.751249] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:13.088 [2024-07-24 19:44:41.751351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60795 ] 00:06:13.346 [2024-07-24 19:44:41.897148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.604 [2024-07-24 19:44:42.128704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.604 [2024-07-24 19:44:42.237951] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:14.171 19:44:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.171 19:44:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:14.171 19:44:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60795 00:06:14.171 19:44:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60795 00:06:14.171 19:44:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.160 19:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60779 00:06:15.160 19:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60779 ']' 00:06:15.160 19:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60779 00:06:15.160 19:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:15.160 19:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.160 19:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60779 00:06:15.160 19:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.160 19:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.160 killing process with pid 60779 00:06:15.160 19:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60779' 00:06:15.160 19:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60779 00:06:15.160 19:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60779 00:06:16.093 19:44:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60795 00:06:16.093 19:44:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60795 ']' 00:06:16.093 19:44:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60795 00:06:16.093 19:44:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:16.093 19:44:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:16.093 19:44:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60795 00:06:16.093 19:44:44 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:16.093 19:44:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:16.093 killing process with pid 60795 00:06:16.093 19:44:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60795' 00:06:16.093 19:44:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60795 00:06:16.093 19:44:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60795 00:06:16.350 00:06:16.350 real 0m4.234s 00:06:16.350 user 0m4.733s 00:06:16.350 sys 0m1.140s 00:06:16.350 19:44:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.350 19:44:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.350 ************************************ 00:06:16.350 END TEST locking_app_on_unlocked_coremask 00:06:16.350 ************************************ 00:06:16.350 19:44:44 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:16.350 19:44:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.350 19:44:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.350 19:44:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.350 ************************************ 00:06:16.350 START TEST locking_app_on_locked_coremask 00:06:16.350 ************************************ 00:06:16.350 19:44:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:16.350 19:44:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60862 00:06:16.350 19:44:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.350 19:44:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60862 /var/tmp/spdk.sock 00:06:16.350 19:44:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60862 ']' 00:06:16.350 19:44:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.350 19:44:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.350 19:44:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.350 19:44:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.350 19:44:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.351 [2024-07-24 19:44:44.966718] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:16.351 [2024-07-24 19:44:44.966830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60862 ] 00:06:16.607 [2024-07-24 19:44:45.096685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.607 [2024-07-24 19:44:45.225357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.864 [2024-07-24 19:44:45.284332] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:17.430 19:44:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.430 19:44:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:17.430 19:44:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60878 00:06:17.430 19:44:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:17.430 19:44:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60878 /var/tmp/spdk2.sock 00:06:17.430 19:44:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:17.430 19:44:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60878 /var/tmp/spdk2.sock 00:06:17.430 19:44:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:17.430 19:44:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.430 19:44:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:17.430 19:44:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.430 19:44:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60878 /var/tmp/spdk2.sock 00:06:17.430 19:44:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60878 ']' 00:06:17.430 19:44:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.430 19:44:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.430 19:44:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.430 19:44:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.430 19:44:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.430 [2024-07-24 19:44:46.006937] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:17.430 [2024-07-24 19:44:46.007052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60878 ] 00:06:17.688 [2024-07-24 19:44:46.145207] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60862 has claimed it. 00:06:17.688 [2024-07-24 19:44:46.145296] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:18.252 ERROR: process (pid: 60878) is no longer running 00:06:18.252 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60878) - No such process 00:06:18.252 19:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.252 19:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:18.252 19:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:18.252 19:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:18.252 19:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:18.252 19:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:18.252 19:44:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60862 00:06:18.252 19:44:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60862 00:06:18.252 19:44:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.511 19:44:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60862 00:06:18.511 19:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60862 ']' 00:06:18.511 19:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60862 00:06:18.511 19:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:18.511 19:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.511 19:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60862 00:06:18.511 19:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:18.511 19:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:18.511 killing process with pid 60862 00:06:18.511 19:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60862' 00:06:18.511 19:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60862 00:06:18.511 19:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60862 00:06:18.770 00:06:18.770 real 0m2.473s 00:06:18.770 user 0m2.818s 00:06:18.770 sys 0m0.594s 00:06:18.770 19:44:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.770 19:44:47 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:18.770 ************************************ 00:06:18.770 END TEST locking_app_on_locked_coremask 00:06:18.770 ************************************ 00:06:18.770 19:44:47 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:18.770 19:44:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.770 19:44:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.770 19:44:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.030 ************************************ 00:06:19.030 START TEST locking_overlapped_coremask 00:06:19.030 ************************************ 00:06:19.030 19:44:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:19.030 19:44:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60924 00:06:19.030 19:44:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60924 /var/tmp/spdk.sock 00:06:19.030 19:44:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60924 ']' 00:06:19.030 19:44:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.030 19:44:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:19.030 19:44:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.030 19:44:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.030 19:44:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.030 19:44:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.030 [2024-07-24 19:44:47.501946] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:19.030 [2024-07-24 19:44:47.502057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60924 ] 00:06:19.030 [2024-07-24 19:44:47.639574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:19.287 [2024-07-24 19:44:47.755629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.287 [2024-07-24 19:44:47.755774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.287 [2024-07-24 19:44:47.755786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.287 [2024-07-24 19:44:47.812229] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:19.890 19:44:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.890 19:44:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:19.890 19:44:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:19.890 19:44:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60942 00:06:19.890 19:44:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60942 /var/tmp/spdk2.sock 00:06:19.890 19:44:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:19.890 19:44:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60942 /var/tmp/spdk2.sock 00:06:19.890 19:44:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:19.890 19:44:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.890 19:44:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:19.891 19:44:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.891 19:44:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60942 /var/tmp/spdk2.sock 00:06:19.891 19:44:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60942 ']' 00:06:19.891 19:44:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.891 19:44:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.891 19:44:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.891 19:44:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.891 19:44:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.149 [2024-07-24 19:44:48.575235] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
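The two targets in this test are started with core masks 0x7 and 0x1c (see the spdk_tgt invocations above), and the only core those masks share is core 2, which is why the lock claim below fails there. A quick shell sketch of that decoding, illustrative only and not part of the captured run:

for mask in 0x7 0x1c; do
  cores=""
  for core in 0 1 2 3 4 5 6 7; do
    # append the core number if its bit is set in the mask
    (( (mask >> core) & 1 )) && cores+="$core "
  done
  echo "$mask -> cores: $cores"
done
# 0x7  -> cores: 0 1 2
# 0x1c -> cores: 2 3 4   (overlap on core 2)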
00:06:20.149 [2024-07-24 19:44:48.575342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60942 ] 00:06:20.149 [2024-07-24 19:44:48.720674] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60924 has claimed it. 00:06:20.149 [2024-07-24 19:44:48.720736] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:20.715 ERROR: process (pid: 60942) is no longer running 00:06:20.715 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60942) - No such process 00:06:20.715 19:44:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.715 19:44:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:20.715 19:44:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:20.715 19:44:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:20.715 19:44:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:20.715 19:44:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:20.715 19:44:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:20.715 19:44:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:20.715 19:44:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:20.715 19:44:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:20.715 19:44:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60924 00:06:20.715 19:44:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 60924 ']' 00:06:20.715 19:44:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 60924 00:06:20.715 19:44:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:20.715 19:44:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.715 19:44:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60924 00:06:20.715 19:44:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.715 19:44:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.715 killing process with pid 60924 00:06:20.715 19:44:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60924' 00:06:20.715 19:44:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 60924 00:06:20.715 19:44:49 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 60924 00:06:21.279 00:06:21.279 real 0m2.279s 00:06:21.279 user 0m6.304s 00:06:21.279 sys 0m0.484s 00:06:21.279 19:44:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.279 19:44:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.279 ************************************ 00:06:21.279 END TEST locking_overlapped_coremask 00:06:21.279 ************************************ 00:06:21.279 19:44:49 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:21.279 19:44:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.279 19:44:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.279 19:44:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.279 ************************************ 00:06:21.279 START TEST locking_overlapped_coremask_via_rpc 00:06:21.279 ************************************ 00:06:21.279 19:44:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:21.279 19:44:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60982 00:06:21.279 19:44:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60982 /var/tmp/spdk.sock 00:06:21.279 19:44:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:21.279 19:44:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60982 ']' 00:06:21.279 19:44:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.279 19:44:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.279 19:44:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.279 19:44:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.279 19:44:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.279 [2024-07-24 19:44:49.822165] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:21.279 [2024-07-24 19:44:49.822281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60982 ] 00:06:21.537 [2024-07-24 19:44:49.958584] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:21.537 [2024-07-24 19:44:49.958697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.537 [2024-07-24 19:44:50.073788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.537 [2024-07-24 19:44:50.073890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.537 [2024-07-24 19:44:50.073898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.537 [2024-07-24 19:44:50.129515] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:22.473 19:44:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.473 19:44:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:22.473 19:44:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61000 00:06:22.473 19:44:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:22.473 19:44:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61000 /var/tmp/spdk2.sock 00:06:22.473 19:44:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61000 ']' 00:06:22.473 19:44:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.473 19:44:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.473 19:44:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.473 19:44:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.473 19:44:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.473 [2024-07-24 19:44:50.865930] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:22.473 [2024-07-24 19:44:50.866014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61000 ] 00:06:22.473 [2024-07-24 19:44:51.008884] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:22.473 [2024-07-24 19:44:51.008944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.732 [2024-07-24 19:44:51.212553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.732 [2024-07-24 19:44:51.219885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:22.732 [2024-07-24 19:44:51.219886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.732 [2024-07-24 19:44:51.321193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.299 [2024-07-24 19:44:51.847905] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60982 has claimed it. 
00:06:23.299 request: 00:06:23.299 { 00:06:23.299 "method": "framework_enable_cpumask_locks", 00:06:23.299 "req_id": 1 00:06:23.299 } 00:06:23.299 Got JSON-RPC error response 00:06:23.299 response: 00:06:23.299 { 00:06:23.299 "code": -32603, 00:06:23.299 "message": "Failed to claim CPU core: 2" 00:06:23.299 } 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60982 /var/tmp/spdk.sock 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60982 ']' 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.299 19:44:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.557 19:44:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.557 19:44:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:23.557 19:44:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61000 /var/tmp/spdk2.sock 00:06:23.557 19:44:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 61000 ']' 00:06:23.557 19:44:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.557 19:44:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.557 19:44:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
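Both targets in this test were launched with --disable-cpumask-locks, so the per-core locks are only taken when the test calls framework_enable_cpumask_locks over JSON-RPC: the call against the default socket succeeds and creates the lock files, while the call against /var/tmp/spdk2.sock fails with the -32603 response above because core 2 is already claimed. A sketch of issuing the same RPCs by hand with scripts/rpc.py, using the socket and lock-file paths seen in this run; the outcome comments are expectations, not captured output:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
# expected: /var/tmp/spdk_cpu_lock_000 .. /var/tmp/spdk_cpu_lock_002 now exist for the -m 0x7 target
ls /var/tmp/spdk_cpu_lock_*
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# expected: JSON-RPC error -32603 "Failed to claim CPU core: 2", matching the response above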
00:06:23.557 19:44:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.557 19:44:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.816 19:44:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.816 19:44:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:23.816 19:44:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:23.816 19:44:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:23.816 19:44:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:23.816 19:44:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:23.816 00:06:23.816 real 0m2.661s 00:06:23.816 user 0m1.377s 00:06:23.816 sys 0m0.206s 00:06:23.816 19:44:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.816 19:44:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.816 ************************************ 00:06:23.816 END TEST locking_overlapped_coremask_via_rpc 00:06:23.816 ************************************ 00:06:23.816 19:44:52 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:23.816 19:44:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60982 ]] 00:06:23.816 19:44:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60982 00:06:23.816 19:44:52 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60982 ']' 00:06:23.816 19:44:52 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60982 00:06:23.816 19:44:52 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:23.816 19:44:52 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:23.816 19:44:52 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60982 00:06:24.075 19:44:52 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:24.075 19:44:52 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.075 killing process with pid 60982 00:06:24.075 19:44:52 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60982' 00:06:24.075 19:44:52 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 60982 00:06:24.075 19:44:52 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 60982 00:06:24.334 19:44:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61000 ]] 00:06:24.334 19:44:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61000 00:06:24.334 19:44:52 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61000 ']' 00:06:24.334 19:44:52 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61000 00:06:24.334 19:44:52 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:24.334 19:44:52 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.334 
19:44:52 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61000 00:06:24.334 killing process with pid 61000 00:06:24.334 19:44:52 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:24.334 19:44:52 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:24.334 19:44:52 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61000' 00:06:24.334 19:44:52 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 61000 00:06:24.334 19:44:52 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 61000 00:06:24.942 19:44:53 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:24.942 19:44:53 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:24.942 19:44:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60982 ]] 00:06:24.942 19:44:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60982 00:06:24.942 19:44:53 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60982 ']' 00:06:24.942 19:44:53 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60982 00:06:24.942 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (60982) - No such process 00:06:24.942 Process with pid 60982 is not found 00:06:24.942 Process with pid 61000 is not found 00:06:24.942 19:44:53 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 60982 is not found' 00:06:24.942 19:44:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61000 ]] 00:06:24.942 19:44:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61000 00:06:24.942 19:44:53 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 61000 ']' 00:06:24.942 19:44:53 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 61000 00:06:24.942 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (61000) - No such process 00:06:24.942 19:44:53 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 61000 is not found' 00:06:24.942 19:44:53 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:24.942 00:06:24.942 real 0m20.642s 00:06:24.942 user 0m36.183s 00:06:24.942 sys 0m5.357s 00:06:24.942 19:44:53 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.942 19:44:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.942 ************************************ 00:06:24.942 END TEST cpu_locks 00:06:24.942 ************************************ 00:06:24.942 ************************************ 00:06:24.942 END TEST event 00:06:24.942 ************************************ 00:06:24.942 00:06:24.942 real 0m49.490s 00:06:24.942 user 1m36.429s 00:06:24.942 sys 0m9.155s 00:06:24.942 19:44:53 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.942 19:44:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.942 19:44:53 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:24.942 19:44:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.942 19:44:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.942 19:44:53 -- common/autotest_common.sh@10 -- # set +x 00:06:24.942 ************************************ 00:06:24.942 START TEST thread 00:06:24.942 ************************************ 00:06:24.942 19:44:53 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:24.942 * Looking for test storage... 
00:06:24.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:24.942 19:44:53 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:24.942 19:44:53 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:24.942 19:44:53 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.942 19:44:53 thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.942 ************************************ 00:06:24.942 START TEST thread_poller_perf 00:06:24.942 ************************************ 00:06:24.942 19:44:53 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:24.942 [2024-07-24 19:44:53.514406] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:24.942 [2024-07-24 19:44:53.514527] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61128 ] 00:06:25.201 [2024-07-24 19:44:53.654749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.201 [2024-07-24 19:44:53.773693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.201 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:26.577 ====================================== 00:06:26.577 busy:2208655647 (cyc) 00:06:26.577 total_run_count: 315000 00:06:26.577 tsc_hz: 2200000000 (cyc) 00:06:26.577 ====================================== 00:06:26.577 poller_cost: 7011 (cyc), 3186 (nsec) 00:06:26.577 00:06:26.577 ************************************ 00:06:26.577 END TEST thread_poller_perf 00:06:26.577 ************************************ 00:06:26.577 real 0m1.387s 00:06:26.577 user 0m1.210s 00:06:26.577 sys 0m0.067s 00:06:26.577 19:44:54 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.577 19:44:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.577 19:44:54 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.577 19:44:54 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:26.577 19:44:54 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.577 19:44:54 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.577 ************************************ 00:06:26.577 START TEST thread_poller_perf 00:06:26.577 ************************************ 00:06:26.577 19:44:54 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.577 [2024-07-24 19:44:54.956483] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:26.577 [2024-07-24 19:44:54.956605] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61158 ] 00:06:26.577 [2024-07-24 19:44:55.093008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.577 Running 1000 pollers for 1 seconds with 0 microseconds period. 
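The poller_cost line in each result block is the busy cycle count divided by total_run_count, converted to nanoseconds using the reported tsc_hz. Reproducing the first run's figures from its counters, as a sketch whose rounding in the last digit may differ slightly from poller_perf's own:

busy=2208655647; runs=315000; tsc_hz=2200000000
awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" \
    'BEGIN { c = int(b / r); n = int(c / (hz / 1e9));
             printf "poller_cost: %d (cyc), %d (nsec)\n", c, n }'
# prints: poller_cost: 7011 (cyc), 3186 (nsec)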
00:06:26.577 [2024-07-24 19:44:55.205786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.952 ====================================== 00:06:27.952 busy:2202298897 (cyc) 00:06:27.952 total_run_count: 4328000 00:06:27.952 tsc_hz: 2200000000 (cyc) 00:06:27.952 ====================================== 00:06:27.952 poller_cost: 508 (cyc), 230 (nsec) 00:06:27.952 ************************************ 00:06:27.952 END TEST thread_poller_perf 00:06:27.952 ************************************ 00:06:27.952 00:06:27.952 real 0m1.352s 00:06:27.952 user 0m1.192s 00:06:27.952 sys 0m0.052s 00:06:27.952 19:44:56 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.952 19:44:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.952 19:44:56 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:27.952 ************************************ 00:06:27.952 END TEST thread 00:06:27.952 ************************************ 00:06:27.952 00:06:27.952 real 0m2.926s 00:06:27.952 user 0m2.472s 00:06:27.952 sys 0m0.230s 00:06:27.952 19:44:56 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.952 19:44:56 thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.952 19:44:56 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:06:27.952 19:44:56 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:27.952 19:44:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.952 19:44:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.952 19:44:56 -- common/autotest_common.sh@10 -- # set +x 00:06:27.952 ************************************ 00:06:27.952 START TEST app_cmdline 00:06:27.952 ************************************ 00:06:27.952 19:44:56 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:27.952 * Looking for test storage... 00:06:27.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:27.952 19:44:56 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:27.952 19:44:56 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61237 00:06:27.952 19:44:56 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61237 00:06:27.952 19:44:56 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 61237 ']' 00:06:27.952 19:44:56 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:27.952 19:44:56 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.952 19:44:56 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.952 19:44:56 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.952 19:44:56 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.952 19:44:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.952 [2024-07-24 19:44:56.522949] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:27.952 [2024-07-24 19:44:56.523053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61237 ] 00:06:28.211 [2024-07-24 19:44:56.664281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.211 [2024-07-24 19:44:56.788759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.211 [2024-07-24 19:44:56.846344] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:29.147 19:44:57 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.147 19:44:57 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:29.147 19:44:57 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:29.147 { 00:06:29.147 "version": "SPDK v24.09-pre git sha1 0c322284f", 00:06:29.147 "fields": { 00:06:29.147 "major": 24, 00:06:29.147 "minor": 9, 00:06:29.147 "patch": 0, 00:06:29.147 "suffix": "-pre", 00:06:29.147 "commit": "0c322284f" 00:06:29.147 } 00:06:29.147 } 00:06:29.147 19:44:57 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:29.147 19:44:57 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:29.148 19:44:57 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:29.148 19:44:57 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:29.148 19:44:57 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:29.148 19:44:57 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:29.148 19:44:57 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.148 19:44:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:29.148 19:44:57 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:29.148 19:44:57 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.407 19:44:57 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:29.407 19:44:57 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:29.407 19:44:57 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:29.407 19:44:57 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:29.407 19:44:57 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:29.407 19:44:57 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:29.407 19:44:57 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.407 19:44:57 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:29.407 19:44:57 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.407 19:44:57 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:29.407 19:44:57 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.407 19:44:57 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:29.407 19:44:57 app_cmdline -- common/autotest_common.sh@644 -- # [[ 
-x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:29.407 19:44:57 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:29.407 request: 00:06:29.407 { 00:06:29.407 "method": "env_dpdk_get_mem_stats", 00:06:29.407 "req_id": 1 00:06:29.407 } 00:06:29.407 Got JSON-RPC error response 00:06:29.407 response: 00:06:29.407 { 00:06:29.407 "code": -32601, 00:06:29.407 "message": "Method not found" 00:06:29.407 } 00:06:29.407 19:44:58 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:29.407 19:44:58 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:29.407 19:44:58 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:29.407 19:44:58 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:29.407 19:44:58 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61237 00:06:29.407 19:44:58 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 61237 ']' 00:06:29.407 19:44:58 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 61237 00:06:29.407 19:44:58 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:29.407 19:44:58 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.407 19:44:58 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61237 00:06:29.666 killing process with pid 61237 00:06:29.666 19:44:58 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:29.666 19:44:58 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:29.666 19:44:58 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61237' 00:06:29.666 19:44:58 app_cmdline -- common/autotest_common.sh@969 -- # kill 61237 00:06:29.666 19:44:58 app_cmdline -- common/autotest_common.sh@974 -- # wait 61237 00:06:29.925 ************************************ 00:06:29.925 END TEST app_cmdline 00:06:29.925 ************************************ 00:06:29.925 00:06:29.925 real 0m2.086s 00:06:29.925 user 0m2.611s 00:06:29.925 sys 0m0.471s 00:06:29.925 19:44:58 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.925 19:44:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:29.925 19:44:58 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:29.925 19:44:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.925 19:44:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.925 19:44:58 -- common/autotest_common.sh@10 -- # set +x 00:06:29.925 ************************************ 00:06:29.925 START TEST version 00:06:29.925 ************************************ 00:06:29.925 19:44:58 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:29.925 * Looking for test storage... 
00:06:30.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:30.184 19:44:58 version -- app/version.sh@17 -- # get_header_version major 00:06:30.184 19:44:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:30.184 19:44:58 version -- app/version.sh@14 -- # cut -f2 00:06:30.184 19:44:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:30.184 19:44:58 version -- app/version.sh@17 -- # major=24 00:06:30.184 19:44:58 version -- app/version.sh@18 -- # get_header_version minor 00:06:30.184 19:44:58 version -- app/version.sh@14 -- # cut -f2 00:06:30.184 19:44:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:30.184 19:44:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:30.184 19:44:58 version -- app/version.sh@18 -- # minor=9 00:06:30.184 19:44:58 version -- app/version.sh@19 -- # get_header_version patch 00:06:30.184 19:44:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:30.184 19:44:58 version -- app/version.sh@14 -- # cut -f2 00:06:30.184 19:44:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:30.184 19:44:58 version -- app/version.sh@19 -- # patch=0 00:06:30.184 19:44:58 version -- app/version.sh@20 -- # get_header_version suffix 00:06:30.184 19:44:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:30.184 19:44:58 version -- app/version.sh@14 -- # cut -f2 00:06:30.184 19:44:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:30.184 19:44:58 version -- app/version.sh@20 -- # suffix=-pre 00:06:30.184 19:44:58 version -- app/version.sh@22 -- # version=24.9 00:06:30.184 19:44:58 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:30.184 19:44:58 version -- app/version.sh@28 -- # version=24.9rc0 00:06:30.184 19:44:58 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:30.184 19:44:58 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:30.184 19:44:58 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:30.184 19:44:58 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:30.184 ************************************ 00:06:30.184 END TEST version 00:06:30.184 ************************************ 00:06:30.184 00:06:30.184 real 0m0.151s 00:06:30.184 user 0m0.078s 00:06:30.184 sys 0m0.103s 00:06:30.184 19:44:58 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.184 19:44:58 version -- common/autotest_common.sh@10 -- # set +x 00:06:30.184 19:44:58 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:06:30.184 19:44:58 -- spdk/autotest.sh@202 -- # uname -s 00:06:30.184 19:44:58 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:06:30.184 19:44:58 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:30.184 19:44:58 -- spdk/autotest.sh@203 -- # [[ 1 -eq 1 ]] 00:06:30.184 19:44:58 -- spdk/autotest.sh@209 -- # [[ 0 -eq 0 ]] 00:06:30.184 19:44:58 -- spdk/autotest.sh@210 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:30.184 19:44:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.184 19:44:58 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.184 19:44:58 -- common/autotest_common.sh@10 -- # set +x 00:06:30.184 ************************************ 00:06:30.184 START TEST spdk_dd 00:06:30.184 ************************************ 00:06:30.184 19:44:58 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:30.184 * Looking for test storage... 00:06:30.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:30.184 19:44:58 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:30.184 19:44:58 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.184 19:44:58 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.184 19:44:58 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.184 19:44:58 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.184 19:44:58 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.184 19:44:58 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.184 19:44:58 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:30.184 19:44:58 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.184 19:44:58 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:30.446 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:30.707 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:30.707 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:30.707 19:44:59 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:30.707 19:44:59 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:06:30.707 19:44:59 spdk_dd -- 
scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@230 -- # local class 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@232 -- # local progif 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@233 -- # class=01 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:06:30.707 19:44:59 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 
0000:00:10.0 0000:00:11.0 00:06:30.707 19:44:59 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@139 -- # local lib 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* 
]] 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.707 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 
-- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 
== liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == 
liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:30.708 * spdk_dd linked to liburing 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:30.708 19:44:59 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:30.708 19:44:59 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:30.708 19:44:59 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:30.708 19:44:59 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:30.708 19:44:59 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:30.708 19:44:59 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:30.708 19:44:59 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:30.708 19:44:59 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:30.708 19:44:59 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:30.708 19:44:59 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:30.708 19:44:59 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:30.708 19:44:59 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:30.708 19:44:59 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:30.708 19:44:59 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:30.709 19:44:59 spdk_dd -- 
common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 
00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:30.709 19:44:59 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:06:30.709 19:44:59 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:30.709 19:44:59 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:06:30.709 19:44:59 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:06:30.709 19:44:59 spdk_dd -- dd/common.sh@153 -- # return 0 00:06:30.709 19:44:59 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:30.709 19:44:59 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:30.709 19:44:59 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:30.709 19:44:59 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.709 19:44:59 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:30.709 ************************************ 00:06:30.709 START TEST spdk_dd_basic_rw 00:06:30.709 ************************************ 00:06:30.709 19:44:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:30.709 * Looking for test storage... 00:06:30.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:30.709 19:44:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:30.709 19:44:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.709 19:44:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.709 19:44:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.709 19:44:59 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.709 19:44:59 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.709 19:44:59 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.709 19:44:59 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:30.709 19:44:59 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.709 19:44:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:30.709 19:44:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:30.709 19:44:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:30.709 19:44:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:30.709 19:44:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:30.709 19:44:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:30.709 19:44:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:30.709 19:44:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:30.709 19:44:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:30.709 19:44:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:30.709 19:44:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:30.709 19:44:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:30.709 19:44:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:30.971 19:44:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted 
Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not 
Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:30.971 19:44:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:30.972 19:44:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete 
Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): 
Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b 
Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:30.972 19:44:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:30.972 19:44:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:30.972 19:44:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:30.972 19:44:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:30.972 19:44:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:30.972 19:44:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:30.972 19:44:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:30.972 19:44:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:30.972 19:44:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.972 19:44:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:30.972 19:44:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:30.972 ************************************ 00:06:30.972 START TEST dd_bs_lt_native_bs 00:06:30.972 ************************************ 00:06:30.972 19:44:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:30.972 19:44:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:06:30.972 19:44:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:30.972 19:44:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.972 19:44:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.972 19:44:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.972 19:44:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.972 19:44:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.972 19:44:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:30.972 19:44:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.972 19:44:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:30.972 19:44:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:30.972 { 00:06:30.972 "subsystems": [ 00:06:30.972 { 00:06:30.972 "subsystem": "bdev", 00:06:30.972 "config": [ 00:06:30.972 { 00:06:30.972 "params": { 00:06:30.972 "trtype": "pcie", 00:06:30.972 "traddr": "0000:00:10.0", 00:06:30.972 "name": "Nvme0" 00:06:30.972 }, 00:06:30.972 "method": 
"bdev_nvme_attach_controller" 00:06:30.972 }, 00:06:30.972 { 00:06:30.972 "method": "bdev_wait_for_examine" 00:06:30.972 } 00:06:30.972 ] 00:06:30.972 } 00:06:30.972 ] 00:06:30.972 } 00:06:30.972 [2024-07-24 19:44:59.612170] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:30.972 [2024-07-24 19:44:59.612278] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61560 ] 00:06:31.231 [2024-07-24 19:44:59.752336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.231 [2024-07-24 19:44:59.851140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.489 [2024-07-24 19:44:59.909468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:31.489 [2024-07-24 19:45:00.017080] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:31.489 [2024-07-24 19:45:00.017164] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:31.489 [2024-07-24 19:45:00.139895] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:31.748 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:06:31.748 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:31.748 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:06:31.748 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:06:31.748 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:06:31.748 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:31.748 00:06:31.748 real 0m0.667s 00:06:31.748 user 0m0.449s 00:06:31.748 sys 0m0.173s 00:06:31.748 ************************************ 00:06:31.748 END TEST dd_bs_lt_native_bs 00:06:31.748 ************************************ 00:06:31.748 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.748 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:31.748 19:45:00 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:31.748 19:45:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:31.748 19:45:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.748 19:45:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:31.748 ************************************ 00:06:31.748 START TEST dd_rw 00:06:31.748 ************************************ 00:06:31.748 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:06:31.748 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:31.748 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:31.748 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:31.748 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:31.748 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:31.748 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:31.748 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:31.748 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:31.749 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:31.749 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:31.749 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:31.749 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:31.749 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:31.749 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:31.749 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:31.749 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:31.749 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:31.749 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:32.316 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:32.316 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:32.316 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:32.316 19:45:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:32.316 [2024-07-24 19:45:00.900887] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
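Annotation: the dd_rw parameters traced above are all derived from the native block size that spdk_nvme_identify reported for the controller at 0000:00:10.0. A condensed, illustrative shell sketch of that arithmetic (not the verbatim test/dd/basic_rw.sh, just the values visible in the trace):
native_bs=4096                    # data size of the current LBA format (#04) from the identify output
qds=(1 64)                        # queue depths exercised for each block size
bss=()
for bs in {0..2}; do
  bss+=($(( native_bs << bs )))   # 4096, 8192, 16384
done
count=15
size=$(( count * native_bs ))     # 15 x 4096 = 61440 bytes copied per pass
echo "bss=${bss[*]} qds=${qds[*]} size=$size"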
00:06:32.316 [2024-07-24 19:45:00.900996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61591 ] 00:06:32.316 { 00:06:32.316 "subsystems": [ 00:06:32.316 { 00:06:32.316 "subsystem": "bdev", 00:06:32.317 "config": [ 00:06:32.317 { 00:06:32.317 "params": { 00:06:32.317 "trtype": "pcie", 00:06:32.317 "traddr": "0000:00:10.0", 00:06:32.317 "name": "Nvme0" 00:06:32.317 }, 00:06:32.317 "method": "bdev_nvme_attach_controller" 00:06:32.317 }, 00:06:32.317 { 00:06:32.317 "method": "bdev_wait_for_examine" 00:06:32.317 } 00:06:32.317 ] 00:06:32.317 } 00:06:32.317 ] 00:06:32.317 } 00:06:32.575 [2024-07-24 19:45:01.038760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.575 [2024-07-24 19:45:01.128347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.575 [2024-07-24 19:45:01.183118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.091  Copying: 60/60 [kB] (average 29 MBps) 00:06:33.091 00:06:33.091 19:45:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:33.091 19:45:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:33.091 19:45:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:33.091 19:45:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:33.091 [2024-07-24 19:45:01.565193] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
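Annotation: the --json /dev/fd/62 argument in these spdk_dd invocations carries the bdev subsystem configuration printed in the trace (a pcie-attached Nvme0 controller followed by bdev_wait_for_examine). A roughly equivalent standalone invocation, sketched with an assumed file name in place of the generated descriptor, would be:
# /tmp/nvme0_bdev.json is an illustrative name; the test feeds the same JSON over a pipe.
cat > /tmp/nvme0_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
  --ob=Nvme0n1 --bs=4096 --qd=1 --json /tmp/nvme0_bdev.json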
00:06:33.092 [2024-07-24 19:45:01.565299] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61604 ] 00:06:33.092 { 00:06:33.092 "subsystems": [ 00:06:33.092 { 00:06:33.092 "subsystem": "bdev", 00:06:33.092 "config": [ 00:06:33.092 { 00:06:33.092 "params": { 00:06:33.092 "trtype": "pcie", 00:06:33.092 "traddr": "0000:00:10.0", 00:06:33.092 "name": "Nvme0" 00:06:33.092 }, 00:06:33.092 "method": "bdev_nvme_attach_controller" 00:06:33.092 }, 00:06:33.092 { 00:06:33.092 "method": "bdev_wait_for_examine" 00:06:33.092 } 00:06:33.092 ] 00:06:33.092 } 00:06:33.092 ] 00:06:33.092 } 00:06:33.092 [2024-07-24 19:45:01.704107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.350 [2024-07-24 19:45:01.799576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.350 [2024-07-24 19:45:01.855940] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.608  Copying: 60/60 [kB] (average 19 MBps) 00:06:33.608 00:06:33.608 19:45:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:33.608 19:45:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:33.608 19:45:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:33.608 19:45:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:33.608 19:45:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:33.608 19:45:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:33.608 19:45:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:33.608 19:45:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:33.608 19:45:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:33.608 19:45:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:33.608 19:45:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:33.608 [2024-07-24 19:45:02.232340] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:33.608 [2024-07-24 19:45:02.232454] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61620 ] 00:06:33.608 { 00:06:33.608 "subsystems": [ 00:06:33.608 { 00:06:33.608 "subsystem": "bdev", 00:06:33.608 "config": [ 00:06:33.608 { 00:06:33.608 "params": { 00:06:33.608 "trtype": "pcie", 00:06:33.608 "traddr": "0000:00:10.0", 00:06:33.608 "name": "Nvme0" 00:06:33.608 }, 00:06:33.608 "method": "bdev_nvme_attach_controller" 00:06:33.608 }, 00:06:33.608 { 00:06:33.608 "method": "bdev_wait_for_examine" 00:06:33.608 } 00:06:33.608 ] 00:06:33.608 } 00:06:33.608 ] 00:06:33.608 } 00:06:33.867 [2024-07-24 19:45:02.369636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.867 [2024-07-24 19:45:02.485455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.125 [2024-07-24 19:45:02.542513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:34.383  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:34.383 00:06:34.383 19:45:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:34.383 19:45:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:34.383 19:45:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:34.383 19:45:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:34.383 19:45:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:34.383 19:45:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:34.383 19:45:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:34.950 19:45:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:34.950 19:45:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:34.950 19:45:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:34.950 19:45:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:34.950 [2024-07-24 19:45:03.535391] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:34.950 [2024-07-24 19:45:03.535491] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61639 ] 00:06:34.950 { 00:06:34.950 "subsystems": [ 00:06:34.950 { 00:06:34.950 "subsystem": "bdev", 00:06:34.950 "config": [ 00:06:34.950 { 00:06:34.950 "params": { 00:06:34.950 "trtype": "pcie", 00:06:34.950 "traddr": "0000:00:10.0", 00:06:34.950 "name": "Nvme0" 00:06:34.950 }, 00:06:34.950 "method": "bdev_nvme_attach_controller" 00:06:34.950 }, 00:06:34.950 { 00:06:34.950 "method": "bdev_wait_for_examine" 00:06:34.950 } 00:06:34.950 ] 00:06:34.950 } 00:06:34.950 ] 00:06:34.950 } 00:06:35.208 [2024-07-24 19:45:03.672907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.208 [2024-07-24 19:45:03.771986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.208 [2024-07-24 19:45:03.829825] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:35.726  Copying: 60/60 [kB] (average 58 MBps) 00:06:35.726 00:06:35.726 19:45:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:35.726 19:45:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:35.726 19:45:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:35.726 19:45:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:35.726 { 00:06:35.726 "subsystems": [ 00:06:35.726 { 00:06:35.726 "subsystem": "bdev", 00:06:35.726 "config": [ 00:06:35.726 { 00:06:35.726 "params": { 00:06:35.726 "trtype": "pcie", 00:06:35.726 "traddr": "0000:00:10.0", 00:06:35.726 "name": "Nvme0" 00:06:35.726 }, 00:06:35.726 "method": "bdev_nvme_attach_controller" 00:06:35.726 }, 00:06:35.726 { 00:06:35.726 "method": "bdev_wait_for_examine" 00:06:35.726 } 00:06:35.726 ] 00:06:35.726 } 00:06:35.726 ] 00:06:35.726 } 00:06:35.726 [2024-07-24 19:45:04.237033] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:35.726 [2024-07-24 19:45:04.237164] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61658 ] 00:06:35.726 [2024-07-24 19:45:04.374886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.984 [2024-07-24 19:45:04.483339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.984 [2024-07-24 19:45:04.539538] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:36.242  Copying: 60/60 [kB] (average 58 MBps) 00:06:36.242 00:06:36.242 19:45:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:36.242 19:45:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:36.242 19:45:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:36.242 19:45:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:36.242 19:45:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:36.242 19:45:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:36.242 19:45:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:36.242 19:45:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:36.242 19:45:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:36.242 19:45:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:36.242 19:45:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:36.501 [2024-07-24 19:45:04.928459] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:36.501 [2024-07-24 19:45:04.928558] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61679 ] 00:06:36.501 { 00:06:36.501 "subsystems": [ 00:06:36.501 { 00:06:36.501 "subsystem": "bdev", 00:06:36.501 "config": [ 00:06:36.501 { 00:06:36.501 "params": { 00:06:36.501 "trtype": "pcie", 00:06:36.501 "traddr": "0000:00:10.0", 00:06:36.501 "name": "Nvme0" 00:06:36.501 }, 00:06:36.501 "method": "bdev_nvme_attach_controller" 00:06:36.501 }, 00:06:36.501 { 00:06:36.501 "method": "bdev_wait_for_examine" 00:06:36.501 } 00:06:36.501 ] 00:06:36.501 } 00:06:36.501 ] 00:06:36.501 } 00:06:36.501 [2024-07-24 19:45:05.066356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.501 [2024-07-24 19:45:05.160903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.759 [2024-07-24 19:45:05.215627] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:37.017  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:37.017 00:06:37.017 19:45:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:37.017 19:45:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:37.017 19:45:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:37.017 19:45:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:37.017 19:45:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:37.017 19:45:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:37.017 19:45:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:37.017 19:45:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:37.582 19:45:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:37.582 19:45:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:37.582 19:45:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:37.582 19:45:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:37.582 [2024-07-24 19:45:06.156668] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
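The trace above (dd/basic_rw.sh@30, @37, @44 and @45) repeats one write/read/verify cycle for every block-size/queue-depth pair. Below is a minimal sketch of that cycle, reconstructed only from the commands visible in this log; gen_conf and clear_nvme are helpers from dd/common.sh, and the real basic_rw.sh may differ in detail.

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0   # pre-generated random data
    DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1   # read-back destination

    run_rw_cycle() {
        local bs=$1 qd=$2 count=$3
        # write the random data onto the Nvme0n1 bdev at the given bs/qd
        "$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)
        # read the same number of blocks back into a second dump file
        "$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --bs="$bs" --qd="$qd" --count="$count" --json <(gen_conf)
        # the iteration passes only if the round trip preserved every byte
        diff -q "$DUMP0" "$DUMP1"
        # wipe the written region before the next pair (the log shows a 1 MiB /dev/zero write)
        clear_nvme Nvme0n1 '' $((bs * count))
    }

In the log, /dev/fd/62 plays the role of the process substitution above: the JSON bdev configuration is handed to spdk_dd over a file descriptor instead of a temporary config file.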
00:06:37.582 [2024-07-24 19:45:06.156825] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61698 ] 00:06:37.582 { 00:06:37.582 "subsystems": [ 00:06:37.582 { 00:06:37.582 "subsystem": "bdev", 00:06:37.582 "config": [ 00:06:37.582 { 00:06:37.582 "params": { 00:06:37.582 "trtype": "pcie", 00:06:37.582 "traddr": "0000:00:10.0", 00:06:37.582 "name": "Nvme0" 00:06:37.582 }, 00:06:37.582 "method": "bdev_nvme_attach_controller" 00:06:37.582 }, 00:06:37.582 { 00:06:37.582 "method": "bdev_wait_for_examine" 00:06:37.582 } 00:06:37.582 ] 00:06:37.582 } 00:06:37.582 ] 00:06:37.582 } 00:06:37.840 [2024-07-24 19:45:06.294593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.840 [2024-07-24 19:45:06.383462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.840 [2024-07-24 19:45:06.441440] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:38.098  Copying: 56/56 [kB] (average 54 MBps) 00:06:38.098 00:06:38.373 19:45:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:38.373 19:45:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:38.373 19:45:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:38.373 19:45:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:38.373 { 00:06:38.373 "subsystems": [ 00:06:38.373 { 00:06:38.373 "subsystem": "bdev", 00:06:38.373 "config": [ 00:06:38.373 { 00:06:38.373 "params": { 00:06:38.373 "trtype": "pcie", 00:06:38.373 "traddr": "0000:00:10.0", 00:06:38.373 "name": "Nvme0" 00:06:38.373 }, 00:06:38.373 "method": "bdev_nvme_attach_controller" 00:06:38.373 }, 00:06:38.373 { 00:06:38.373 "method": "bdev_wait_for_examine" 00:06:38.373 } 00:06:38.373 ] 00:06:38.373 } 00:06:38.373 ] 00:06:38.373 } 00:06:38.373 [2024-07-24 19:45:06.827345] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:38.373 [2024-07-24 19:45:06.827484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61717 ] 00:06:38.373 [2024-07-24 19:45:06.969870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.659 [2024-07-24 19:45:07.066019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.659 [2024-07-24 19:45:07.123117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:38.917  Copying: 56/56 [kB] (average 27 MBps) 00:06:38.917 00:06:38.917 19:45:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:38.917 19:45:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:38.917 19:45:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:38.917 19:45:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:38.917 19:45:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:38.917 19:45:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:38.917 19:45:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:38.917 19:45:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:38.917 19:45:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:38.917 19:45:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:38.917 19:45:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:38.917 { 00:06:38.917 "subsystems": [ 00:06:38.917 { 00:06:38.917 "subsystem": "bdev", 00:06:38.917 "config": [ 00:06:38.917 { 00:06:38.917 "params": { 00:06:38.917 "trtype": "pcie", 00:06:38.917 "traddr": "0000:00:10.0", 00:06:38.917 "name": "Nvme0" 00:06:38.917 }, 00:06:38.917 "method": "bdev_nvme_attach_controller" 00:06:38.917 }, 00:06:38.917 { 00:06:38.917 "method": "bdev_wait_for_examine" 00:06:38.917 } 00:06:38.917 ] 00:06:38.917 } 00:06:38.917 ] 00:06:38.917 } 00:06:38.917 [2024-07-24 19:45:07.506870] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:38.917 [2024-07-24 19:45:07.507031] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61727 ] 00:06:39.175 [2024-07-24 19:45:07.653346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.175 [2024-07-24 19:45:07.781173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.175 [2024-07-24 19:45:07.839450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:39.693  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:39.693 00:06:39.693 19:45:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:39.693 19:45:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:39.693 19:45:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:39.693 19:45:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:39.693 19:45:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:39.693 19:45:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:39.693 19:45:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:40.260 19:45:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:40.260 19:45:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:40.260 19:45:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:40.260 19:45:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:40.260 [2024-07-24 19:45:08.777732] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:40.260 [2024-07-24 19:45:08.778321] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61752 ] 00:06:40.260 { 00:06:40.260 "subsystems": [ 00:06:40.260 { 00:06:40.260 "subsystem": "bdev", 00:06:40.260 "config": [ 00:06:40.260 { 00:06:40.260 "params": { 00:06:40.260 "trtype": "pcie", 00:06:40.260 "traddr": "0000:00:10.0", 00:06:40.260 "name": "Nvme0" 00:06:40.260 }, 00:06:40.260 "method": "bdev_nvme_attach_controller" 00:06:40.260 }, 00:06:40.260 { 00:06:40.260 "method": "bdev_wait_for_examine" 00:06:40.260 } 00:06:40.260 ] 00:06:40.260 } 00:06:40.260 ] 00:06:40.260 } 00:06:40.260 [2024-07-24 19:45:08.920734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.519 [2024-07-24 19:45:09.015953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.519 [2024-07-24 19:45:09.072216] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:40.778  Copying: 56/56 [kB] (average 54 MBps) 00:06:40.778 00:06:40.778 19:45:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:40.778 19:45:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:40.778 19:45:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:40.778 19:45:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:41.037 [2024-07-24 19:45:09.462832] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:41.037 [2024-07-24 19:45:09.462958] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61765 ] 00:06:41.037 { 00:06:41.037 "subsystems": [ 00:06:41.037 { 00:06:41.037 "subsystem": "bdev", 00:06:41.037 "config": [ 00:06:41.037 { 00:06:41.037 "params": { 00:06:41.037 "trtype": "pcie", 00:06:41.037 "traddr": "0000:00:10.0", 00:06:41.037 "name": "Nvme0" 00:06:41.037 }, 00:06:41.037 "method": "bdev_nvme_attach_controller" 00:06:41.037 }, 00:06:41.037 { 00:06:41.037 "method": "bdev_wait_for_examine" 00:06:41.037 } 00:06:41.037 ] 00:06:41.037 } 00:06:41.037 ] 00:06:41.037 } 00:06:41.037 [2024-07-24 19:45:09.603599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.295 [2024-07-24 19:45:09.706742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.295 [2024-07-24 19:45:09.763893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.554  Copying: 56/56 [kB] (average 54 MBps) 00:06:41.554 00:06:41.554 19:45:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:41.554 19:45:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:41.554 19:45:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:41.554 19:45:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:41.554 19:45:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:41.554 19:45:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:41.554 19:45:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:41.554 19:45:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:41.554 19:45:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:41.554 19:45:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:41.554 19:45:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:41.554 { 00:06:41.554 "subsystems": [ 00:06:41.554 { 00:06:41.554 "subsystem": "bdev", 00:06:41.554 "config": [ 00:06:41.554 { 00:06:41.554 "params": { 00:06:41.554 "trtype": "pcie", 00:06:41.554 "traddr": "0000:00:10.0", 00:06:41.554 "name": "Nvme0" 00:06:41.554 }, 00:06:41.554 "method": "bdev_nvme_attach_controller" 00:06:41.554 }, 00:06:41.554 { 00:06:41.554 "method": "bdev_wait_for_examine" 00:06:41.554 } 00:06:41.554 ] 00:06:41.554 } 00:06:41.554 ] 00:06:41.554 } 00:06:41.554 [2024-07-24 19:45:10.149849] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:41.554 [2024-07-24 19:45:10.149952] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61786 ] 00:06:41.812 [2024-07-24 19:45:10.291301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.812 [2024-07-24 19:45:10.400545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.812 [2024-07-24 19:45:10.456701] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.329  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:42.329 00:06:42.329 19:45:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:42.329 19:45:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:42.329 19:45:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:42.329 19:45:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:42.329 19:45:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:42.329 19:45:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:42.329 19:45:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:42.329 19:45:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:42.896 19:45:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:42.896 19:45:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:42.896 19:45:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:42.896 19:45:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:42.896 [2024-07-24 19:45:11.310184] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
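The block sizes and counts used in these runs follow from the loop traced at the top of this section (basic_rw.sh@17-18): the bdev's native block size is shifted left 0, 1 and 2 times, giving 4096, 8192 and 16384. The counts printed in the log (15, 7 and 3) are consistent with keeping each transfer just under 64 KiB; that formula is an inference from the numbers, not something visible in this excerpt.

    native_bs=4096
    bss=()
    for shift_amt in {0..2}; do
        bss+=($((native_bs << shift_amt)))   # 4096, 8192, 16384
    done

    for bs in "${bss[@]}"; do
        count=$((65536 / bs - 1))            # 15, 7, 3 as seen in the trace
        size=$((count * bs))                 # 61440, 57344, 49152 bytes
        echo "bs=$bs count=$count size=$size"
    done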
00:06:42.896 [2024-07-24 19:45:11.310291] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61805 ] 00:06:42.896 { 00:06:42.896 "subsystems": [ 00:06:42.896 { 00:06:42.896 "subsystem": "bdev", 00:06:42.896 "config": [ 00:06:42.896 { 00:06:42.896 "params": { 00:06:42.896 "trtype": "pcie", 00:06:42.896 "traddr": "0000:00:10.0", 00:06:42.896 "name": "Nvme0" 00:06:42.896 }, 00:06:42.896 "method": "bdev_nvme_attach_controller" 00:06:42.896 }, 00:06:42.896 { 00:06:42.896 "method": "bdev_wait_for_examine" 00:06:42.896 } 00:06:42.896 ] 00:06:42.896 } 00:06:42.896 ] 00:06:42.896 } 00:06:42.896 [2024-07-24 19:45:11.450945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.896 [2024-07-24 19:45:11.561478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.155 [2024-07-24 19:45:11.618334] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:43.415  Copying: 48/48 [kB] (average 46 MBps) 00:06:43.415 00:06:43.415 19:45:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:43.415 19:45:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:43.415 19:45:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:43.415 19:45:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:43.415 [2024-07-24 19:45:11.986820] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:43.415 [2024-07-24 19:45:11.986954] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61823 ] 00:06:43.415 { 00:06:43.415 "subsystems": [ 00:06:43.415 { 00:06:43.415 "subsystem": "bdev", 00:06:43.415 "config": [ 00:06:43.415 { 00:06:43.415 "params": { 00:06:43.415 "trtype": "pcie", 00:06:43.415 "traddr": "0000:00:10.0", 00:06:43.415 "name": "Nvme0" 00:06:43.415 }, 00:06:43.415 "method": "bdev_nvme_attach_controller" 00:06:43.415 }, 00:06:43.415 { 00:06:43.415 "method": "bdev_wait_for_examine" 00:06:43.415 } 00:06:43.415 ] 00:06:43.415 } 00:06:43.415 ] 00:06:43.415 } 00:06:43.674 [2024-07-24 19:45:12.122483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.674 [2024-07-24 19:45:12.214423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.674 [2024-07-24 19:45:12.267710] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:43.932  Copying: 48/48 [kB] (average 46 MBps) 00:06:43.932 00:06:43.932 19:45:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:43.932 19:45:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:43.932 19:45:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:43.932 19:45:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:43.932 19:45:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:43.932 19:45:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:43.932 19:45:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:43.932 19:45:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:43.932 19:45:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:43.932 19:45:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:43.932 19:45:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:44.190 [2024-07-24 19:45:12.635896] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:44.190 [2024-07-24 19:45:12.636008] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61834 ] 00:06:44.190 { 00:06:44.190 "subsystems": [ 00:06:44.190 { 00:06:44.190 "subsystem": "bdev", 00:06:44.190 "config": [ 00:06:44.190 { 00:06:44.190 "params": { 00:06:44.190 "trtype": "pcie", 00:06:44.190 "traddr": "0000:00:10.0", 00:06:44.190 "name": "Nvme0" 00:06:44.190 }, 00:06:44.190 "method": "bdev_nvme_attach_controller" 00:06:44.190 }, 00:06:44.190 { 00:06:44.190 "method": "bdev_wait_for_examine" 00:06:44.190 } 00:06:44.190 ] 00:06:44.190 } 00:06:44.190 ] 00:06:44.190 } 00:06:44.190 [2024-07-24 19:45:12.771794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.448 [2024-07-24 19:45:12.880162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.448 [2024-07-24 19:45:12.934634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:44.705  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:44.705 00:06:44.705 19:45:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:44.705 19:45:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:44.705 19:45:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:44.705 19:45:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:44.705 19:45:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:44.705 19:45:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:44.705 19:45:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:45.271 19:45:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:45.271 19:45:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:45.271 19:45:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:45.271 19:45:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:45.271 [2024-07-24 19:45:13.798605] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:45.271 [2024-07-24 19:45:13.798803] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61853 ] 00:06:45.271 { 00:06:45.271 "subsystems": [ 00:06:45.271 { 00:06:45.271 "subsystem": "bdev", 00:06:45.271 "config": [ 00:06:45.271 { 00:06:45.271 "params": { 00:06:45.271 "trtype": "pcie", 00:06:45.271 "traddr": "0000:00:10.0", 00:06:45.271 "name": "Nvme0" 00:06:45.271 }, 00:06:45.271 "method": "bdev_nvme_attach_controller" 00:06:45.271 }, 00:06:45.271 { 00:06:45.271 "method": "bdev_wait_for_examine" 00:06:45.271 } 00:06:45.271 ] 00:06:45.271 } 00:06:45.271 ] 00:06:45.271 } 00:06:45.529 [2024-07-24 19:45:13.943205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.529 [2024-07-24 19:45:14.046352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.529 [2024-07-24 19:45:14.099138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:45.787  Copying: 48/48 [kB] (average 46 MBps) 00:06:45.787 00:06:45.787 19:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:45.787 19:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:45.787 19:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:45.787 19:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:46.045 [2024-07-24 19:45:14.494897] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:46.045 [2024-07-24 19:45:14.495051] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61872 ] 00:06:46.045 { 00:06:46.045 "subsystems": [ 00:06:46.045 { 00:06:46.045 "subsystem": "bdev", 00:06:46.045 "config": [ 00:06:46.045 { 00:06:46.045 "params": { 00:06:46.045 "trtype": "pcie", 00:06:46.045 "traddr": "0000:00:10.0", 00:06:46.045 "name": "Nvme0" 00:06:46.045 }, 00:06:46.045 "method": "bdev_nvme_attach_controller" 00:06:46.045 }, 00:06:46.045 { 00:06:46.045 "method": "bdev_wait_for_examine" 00:06:46.045 } 00:06:46.045 ] 00:06:46.045 } 00:06:46.045 ] 00:06:46.045 } 00:06:46.045 [2024-07-24 19:45:14.637994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.304 [2024-07-24 19:45:14.750042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.304 [2024-07-24 19:45:14.803748] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:46.563  Copying: 48/48 [kB] (average 46 MBps) 00:06:46.563 00:06:46.563 19:45:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:46.563 19:45:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:46.563 19:45:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:46.563 19:45:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:46.563 19:45:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:46.563 19:45:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:46.563 19:45:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:46.563 19:45:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:46.564 19:45:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:46.564 19:45:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:46.564 19:45:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:46.564 [2024-07-24 19:45:15.199048] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
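Every spdk_dd invocation in this test receives the same bdev configuration over --json /dev/fd/62; it is printed verbatim in the log output above and simply attaches the PCIe controller at 0000:00:10.0 as Nvme0, then waits for bdev examination to finish. Restated here as a shell variable for reference (gen_conf in dd/common.sh is what actually emits it):

    nvme_conf='{
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [
          { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
            "method": "bdev_nvme_attach_controller" },
          { "method": "bdev_wait_for_examine" }
        ]
      } ]
    }'
    # passed to spdk_dd over a file descriptor, e.g.:
    # "$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --bs=4096 --qd=1 --json <(printf '%s' "$nvme_conf")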
00:06:46.564 [2024-07-24 19:45:15.199193] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61893 ] 00:06:46.564 { 00:06:46.564 "subsystems": [ 00:06:46.564 { 00:06:46.564 "subsystem": "bdev", 00:06:46.564 "config": [ 00:06:46.564 { 00:06:46.564 "params": { 00:06:46.564 "trtype": "pcie", 00:06:46.564 "traddr": "0000:00:10.0", 00:06:46.564 "name": "Nvme0" 00:06:46.564 }, 00:06:46.564 "method": "bdev_nvme_attach_controller" 00:06:46.564 }, 00:06:46.564 { 00:06:46.564 "method": "bdev_wait_for_examine" 00:06:46.564 } 00:06:46.564 ] 00:06:46.564 } 00:06:46.564 ] 00:06:46.564 } 00:06:46.822 [2024-07-24 19:45:15.343652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.822 [2024-07-24 19:45:15.462500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.081 [2024-07-24 19:45:15.518688] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:47.341  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:47.341 00:06:47.341 00:06:47.341 real 0m15.564s 00:06:47.341 user 0m11.536s 00:06:47.341 sys 0m5.609s 00:06:47.341 19:45:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.341 19:45:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:47.341 ************************************ 00:06:47.341 END TEST dd_rw 00:06:47.341 ************************************ 00:06:47.341 19:45:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:47.341 19:45:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.341 19:45:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.341 19:45:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:47.341 ************************************ 00:06:47.341 START TEST dd_rw_offset 00:06:47.341 ************************************ 00:06:47.341 19:45:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:06:47.341 19:45:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:47.341 19:45:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:47.341 19:45:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:47.341 19:45:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:47.341 19:45:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:47.341 19:45:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=rep0e6ijqarrehrk42h2i8rrojducrgwepmu7tbv36d6dacj7nz4ag1210j4m1ionauhslfpr9t5ni8tbw0pkm1bq6oxj9b2fxjrttu6n28p1w38nyvoveg6pgynau3ah8a4fhk4r7bye13y1z6htlh7uzqnzd9tq95duh9g96819n4tixytv1ozf9c4l7nbvosl0l2e5755m92yggrxpeam64ifrbbz3dqaq45zowhzdfodf1dhbiswq52ldndb7a358bh8q1yl97urag9924354p637yism1wmd429ax9b8vmjycxsq9e27sg1yxrnok21bl5taqrncy8w55ttxb0l61xbn8kcd2a751qz0zgxp92drrvj34m7zxxwq6xqzis0tt2dusd27rd9wqylr630jnxkzi58xkkeqghj9f1nghf8kqs5gp0ic68wn7piwj4y4b2waq6cys3tnsbefvqdla6yfyygl2hbuvltrvdqmkaqscl9rt5dhmr5xe3nosyv62n8g0iqlnvo8jw6ylgl8pvnwom96v3od33nb3raz6cc7kokc3exlelizls96ry86qw0c9unwz5wn8h8tg6b9jeq2dtfw7f5bvcsa0yjdt3eq9tn31ntbzvr35l87k00i6m5mtrcx70u5vw10aneidu74bhb9ixzhgeod38sm7s9hhj4dncv996jmkfq8e8mzcby8gkorl5utj15smsphtttgh12h1hb8hxcrdty8o4vitf9b6ig7kchtdv9nurk0u1jpijgnf1z39xk065rxhxrky8ci7laob7ze9dtv96h9e8fdehl1ori8dy2jcuvo3hwdrenz270ocoul35edpcaxsx8moeoik6wc2cwwbhv6qvn1xhmz8gktp7qotusqbw8wa4xzhdj2qw0wpqphq4ult6g1bj2rrby4ysji0h6zfe8it6pwhgf9acvfstd6vkpevcem9eacuy1mpmltwdjjpapoqbjjy3t5x5h85yo4z1drn85dlsg5rtl2ojnit10bht2lo58h4sn16olaj8ra3y35o3judxvt4mfy7drv5q47dog1u9s0zvjtzob698oplws21dexu995hqapf8glgi8m84atpij45yqim0r1k4xl8jxqq1qwowb0lt1gw85uxue4t22ofrvn9y4rl0t4pp1wdt12etyqxnmom1kfesu4femh3k1ob8qcu5dcpa8euri85t2t5u06qrofcbfpmqw278dnz24ankdh7568w8w2a4tx7hbd06k552gab16yxntq76yvdph1lpwiyh7xwig4hbjm9lmogjnrjdwi4s31kqcy08fgsdznjjvoizntrcy2u3gpe89pj8cso7vf36mpn3qj6ucmmfasyxso1ubtnkqw5v1wwfbgrwk5b22vt8o06fnzexuf6z8og55d0ojqvn3ptgky9nj2fzjmn1hms0zy6g4ksapzu29pi5dslc27y77yevlk93owvzvaj643xuguarmvva1ppgmrce82pgxgy8uqfeparw678bsaubnookbj96rfns85vabqb6rff0gkpap6wdbr3ey2fixutv38lpoq196dm3by54xowddx8wtzckvtxokz21983metgfhsg57e2vo0v647qodggvrat821e7yi4j4l4nwxlqk68qseqlnjfri6e9voa1exjubbu3waip2c6n32q413x1lvzcz2rt14qja30dml9jjtq5ehucofby0ecwdcqgi666vupfnnb2bz6kkeortrd8i8ltrq63lbgvn7js5qytw9hgmhowy0m27pk6iggosqwckx0jndfcxxyfkskl4xnqcaqh5l3j8ej0gkfumbsuicjv8b81yq40f73yqb0s3cdwx6s68gxnc2b7cl6ocr9lahyapwru8p7yrqx1fpuwaaqdljybmu4mwlvpld4rjc98d7pmo7je9bwbcsj4j3dpk09m93yadc410uncsb5qbstyfucni8f8uzsueb8upjzbww9drg6v51d91570ekw0mx9w6qjst0z3p5gvg5iiqfzpb88m9th7j7j1vti5sasr2nc13uqr9hfm56fcdzl5w82bv6ymqmat7hougko980r7yixfs5mxhn6o11alm5xydnac9hjt3eispjq4av0u569g16yfo0hajl9x87y23dl7sbgaviqefznqarbuy671z7n5uojdvf2f3bk05u7zbg5hj36rh4zu8wlz1woe30dfo8vlsjdh7ibn50sj6tchcphl0ocxrx8hh4nhhex6uokkn2tpqoite8i98n1vdaa9bs1uso8gkpzpg4nesz11f7pslmvhs5g9067431pl8m4855dt8igyexgpq4ekf85w49c5vdzaj0xjqcoslary1c0wvb4kgf51azcxun2671j8ap3iglt4vawm5yf0rzppkx611xgh9dh9w3hbwwxfvds79bly92d21ed6a0ekz2c428cn60xzfj76paqk4s2oj4kc1962y5pwldnbq3fn1e2kcb75b4r31j4ailkjzyf0f01xe876591ldulpzomiyreho76s3w0qnatjxwzzyvg8ffowrk9hocn1hee6r8b30nz2hbyc6d091ii8xx35809awrzqb5h9s172ewd28bpzry0toeoiwbh9z37ykomeejf0atzd7wnswu6xs3b2kaunejrh7tgnjae3cz1jn4genv6skmz8livb2t3dfh3t3iq32hx1ifp5myxagyw55xmjeile4c6axsj0ma7a90xzfikuxutgt9sz7c4s2n5686se8adpf1qzmmlghwlcwzilzipnzse5tleli7vy4hteuor9mg4m0uzttlf9iq7gfdos431rp233m3glzn2kvl64v677rx895o9ssjizipu6al35mu1jvte5va3au6qew4hk02z4tdrlek4asjw4t8l7d1bjethy6wv3v3baewxhz4ewmcm02x37q4jvod2iploth91blz3i0u8ni4b2v1fb4vsjfq9yc8dboxbk8i2gddxf47hpis1q630w2i6lgye1nxxjssre4gt5fdvxxo0b4fqdda12lrnor1hol7e9mpctm1fi5ncnzh61n1ss9ca69jyt18dbz3c53npfd1tljn4aavbvgxdyqo50sa3v2kjafdkex8fm2u0wo8ylq8kn6atht5w8yzzugzfoiixlam97p4wbnx85dj6zxaca1u29ovz2y4b4x937sl0kjzuyf4huuxyf7knf6pys48pd1azuss8y00207e5d4mb6guf4y00y4azjw28c4t4gpfgbj2wony4o79nia8tcsgs8g9ws8qzw2lcoxz6w69d26q18mayjkthv491etaupd3nlm001a830wc1d99twtwetozcfragerlu0jmzylv2z5pd1hkz2dvswycpr7f1mjcohv8myo3to1rb5jtgojvg11hs6ybrkiqllef25779x3pydqb1svf7n3oeka6970bxz6at39lv2mpm7mldecz280afz8rm5xs5
egfodc8faax216c8wnuhllbosz8ky3gnkey58kua3pcl19xwudmqu2km49x9lfnbjww64etyumg1l97jrx38ldtikb9uph4sxzhi9lk6p0sxr3a2mmlm1g9qyn1gzkz47uotju6812ilg15s4xnk0v5g06pcosq06pp9sdu6vgd6vh1fxww4vokqvrsa54brzsonn75utgcucit9v8kl5utcxpsbd8gt78sd7hlc1e1q79torcpgvvy7fgoi74d2km1iubcas1jm6c51gav1qr10lfw817zeqyxpm01ph7n35uwxml8z1uz9cuv3y8y5e1jxze7iy5qbuoexa6ilc97muh047bj0345dedtbf0fmvk85ul8z0w1mk64k188xdfp54t0a5c2pkabl4l653wc2crgvzbo61gdxv14gjs392ztiyp9k7pd6fnjryr64kno6p9eygs2klzs64p9bl27ppb3qxisr9ubf4q4x25rq1qydcf551t6oqjihcsoygxeiaj2xksblkzz3dakhaqvy8oxbn2sko8 00:06:47.341 19:45:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:47.341 19:45:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:47.341 19:45:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:47.341 19:45:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:47.341 { 00:06:47.341 "subsystems": [ 00:06:47.341 { 00:06:47.341 "subsystem": "bdev", 00:06:47.341 "config": [ 00:06:47.341 { 00:06:47.341 "params": { 00:06:47.341 "trtype": "pcie", 00:06:47.341 "traddr": "0000:00:10.0", 00:06:47.341 "name": "Nvme0" 00:06:47.341 }, 00:06:47.341 "method": "bdev_nvme_attach_controller" 00:06:47.341 }, 00:06:47.341 { 00:06:47.341 "method": "bdev_wait_for_examine" 00:06:47.341 } 00:06:47.341 ] 00:06:47.341 } 00:06:47.341 ] 00:06:47.341 } 00:06:47.341 [2024-07-24 19:45:15.989152] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:47.341 [2024-07-24 19:45:15.989244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61924 ] 00:06:47.600 [2024-07-24 19:45:16.128843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.600 [2024-07-24 19:45:16.243393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.863 [2024-07-24 19:45:16.298726] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.122  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:48.122 00:06:48.122 19:45:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:48.122 19:45:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:48.122 19:45:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:48.122 19:45:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:48.122 [2024-07-24 19:45:16.678836] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:48.122 [2024-07-24 19:45:16.678932] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61937 ] 00:06:48.122 { 00:06:48.122 "subsystems": [ 00:06:48.122 { 00:06:48.122 "subsystem": "bdev", 00:06:48.122 "config": [ 00:06:48.122 { 00:06:48.122 "params": { 00:06:48.122 "trtype": "pcie", 00:06:48.122 "traddr": "0000:00:10.0", 00:06:48.122 "name": "Nvme0" 00:06:48.122 }, 00:06:48.122 "method": "bdev_nvme_attach_controller" 00:06:48.122 }, 00:06:48.122 { 00:06:48.122 "method": "bdev_wait_for_examine" 00:06:48.122 } 00:06:48.122 ] 00:06:48.122 } 00:06:48.122 ] 00:06:48.122 } 00:06:48.381 [2024-07-24 19:45:16.820375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.381 [2024-07-24 19:45:16.934568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.381 [2024-07-24 19:45:16.990963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.639  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:48.639 00:06:48.898 19:45:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:48.899 19:45:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ rep0e6ijqarrehrk42h2i8rrojducrgwepmu7tbv36d6dacj7nz4ag1210j4m1ionauhslfpr9t5ni8tbw0pkm1bq6oxj9b2fxjrttu6n28p1w38nyvoveg6pgynau3ah8a4fhk4r7bye13y1z6htlh7uzqnzd9tq95duh9g96819n4tixytv1ozf9c4l7nbvosl0l2e5755m92yggrxpeam64ifrbbz3dqaq45zowhzdfodf1dhbiswq52ldndb7a358bh8q1yl97urag9924354p637yism1wmd429ax9b8vmjycxsq9e27sg1yxrnok21bl5taqrncy8w55ttxb0l61xbn8kcd2a751qz0zgxp92drrvj34m7zxxwq6xqzis0tt2dusd27rd9wqylr630jnxkzi58xkkeqghj9f1nghf8kqs5gp0ic68wn7piwj4y4b2waq6cys3tnsbefvqdla6yfyygl2hbuvltrvdqmkaqscl9rt5dhmr5xe3nosyv62n8g0iqlnvo8jw6ylgl8pvnwom96v3od33nb3raz6cc7kokc3exlelizls96ry86qw0c9unwz5wn8h8tg6b9jeq2dtfw7f5bvcsa0yjdt3eq9tn31ntbzvr35l87k00i6m5mtrcx70u5vw10aneidu74bhb9ixzhgeod38sm7s9hhj4dncv996jmkfq8e8mzcby8gkorl5utj15smsphtttgh12h1hb8hxcrdty8o4vitf9b6ig7kchtdv9nurk0u1jpijgnf1z39xk065rxhxrky8ci7laob7ze9dtv96h9e8fdehl1ori8dy2jcuvo3hwdrenz270ocoul35edpcaxsx8moeoik6wc2cwwbhv6qvn1xhmz8gktp7qotusqbw8wa4xzhdj2qw0wpqphq4ult6g1bj2rrby4ysji0h6zfe8it6pwhgf9acvfstd6vkpevcem9eacuy1mpmltwdjjpapoqbjjy3t5x5h85yo4z1drn85dlsg5rtl2ojnit10bht2lo58h4sn16olaj8ra3y35o3judxvt4mfy7drv5q47dog1u9s0zvjtzob698oplws21dexu995hqapf8glgi8m84atpij45yqim0r1k4xl8jxqq1qwowb0lt1gw85uxue4t22ofrvn9y4rl0t4pp1wdt12etyqxnmom1kfesu4femh3k1ob8qcu5dcpa8euri85t2t5u06qrofcbfpmqw278dnz24ankdh7568w8w2a4tx7hbd06k552gab16yxntq76yvdph1lpwiyh7xwig4hbjm9lmogjnrjdwi4s31kqcy08fgsdznjjvoizntrcy2u3gpe89pj8cso7vf36mpn3qj6ucmmfasyxso1ubtnkqw5v1wwfbgrwk5b22vt8o06fnzexuf6z8og55d0ojqvn3ptgky9nj2fzjmn1hms0zy6g4ksapzu29pi5dslc27y77yevlk93owvzvaj643xuguarmvva1ppgmrce82pgxgy8uqfeparw678bsaubnookbj96rfns85vabqb6rff0gkpap6wdbr3ey2fixutv38lpoq196dm3by54xowddx8wtzckvtxokz21983metgfhsg57e2vo0v647qodggvrat821e7yi4j4l4nwxlqk68qseqlnjfri6e9voa1exjubbu3waip2c6n32q413x1lvzcz2rt14qja30dml9jjtq5ehucofby0ecwdcqgi666vupfnnb2bz6kkeortrd8i8ltrq63lbgvn7js5qytw9hgmhowy0m27pk6iggosqwckx0jndfcxxyfkskl4xnqcaqh5l3j8ej0gkfumbsuicjv8b81yq40f73yqb0s3cdwx6s68gxnc2b7cl6ocr9lahyapwru8p7yrqx1fpuwaaqdljybmu4mwlvpld4rjc98d7pmo7je9bwbcsj4j3dpk09m93yadc410uncsb5qbstyfucni8f8uzsueb8upjzbww9drg6v51d91570ekw0mx9w6qjst0z3p5gvg5iiqfzpb88m9th7j7j1vti5sasr2nc13uqr9hfm56fcdzl5w82bv6ymqmat7hougko980r7yixfs5mxhn6o11alm5xydnac9hjt3e
ispjq4av0u569g16yfo0hajl9x87y23dl7sbgaviqefznqarbuy671z7n5uojdvf2f3bk05u7zbg5hj36rh4zu8wlz1woe30dfo8vlsjdh7ibn50sj6tchcphl0ocxrx8hh4nhhex6uokkn2tpqoite8i98n1vdaa9bs1uso8gkpzpg4nesz11f7pslmvhs5g9067431pl8m4855dt8igyexgpq4ekf85w49c5vdzaj0xjqcoslary1c0wvb4kgf51azcxun2671j8ap3iglt4vawm5yf0rzppkx611xgh9dh9w3hbwwxfvds79bly92d21ed6a0ekz2c428cn60xzfj76paqk4s2oj4kc1962y5pwldnbq3fn1e2kcb75b4r31j4ailkjzyf0f01xe876591ldulpzomiyreho76s3w0qnatjxwzzyvg8ffowrk9hocn1hee6r8b30nz2hbyc6d091ii8xx35809awrzqb5h9s172ewd28bpzry0toeoiwbh9z37ykomeejf0atzd7wnswu6xs3b2kaunejrh7tgnjae3cz1jn4genv6skmz8livb2t3dfh3t3iq32hx1ifp5myxagyw55xmjeile4c6axsj0ma7a90xzfikuxutgt9sz7c4s2n5686se8adpf1qzmmlghwlcwzilzipnzse5tleli7vy4hteuor9mg4m0uzttlf9iq7gfdos431rp233m3glzn2kvl64v677rx895o9ssjizipu6al35mu1jvte5va3au6qew4hk02z4tdrlek4asjw4t8l7d1bjethy6wv3v3baewxhz4ewmcm02x37q4jvod2iploth91blz3i0u8ni4b2v1fb4vsjfq9yc8dboxbk8i2gddxf47hpis1q630w2i6lgye1nxxjssre4gt5fdvxxo0b4fqdda12lrnor1hol7e9mpctm1fi5ncnzh61n1ss9ca69jyt18dbz3c53npfd1tljn4aavbvgxdyqo50sa3v2kjafdkex8fm2u0wo8ylq8kn6atht5w8yzzugzfoiixlam97p4wbnx85dj6zxaca1u29ovz2y4b4x937sl0kjzuyf4huuxyf7knf6pys48pd1azuss8y00207e5d4mb6guf4y00y4azjw28c4t4gpfgbj2wony4o79nia8tcsgs8g9ws8qzw2lcoxz6w69d26q18mayjkthv491etaupd3nlm001a830wc1d99twtwetozcfragerlu0jmzylv2z5pd1hkz2dvswycpr7f1mjcohv8myo3to1rb5jtgojvg11hs6ybrkiqllef25779x3pydqb1svf7n3oeka6970bxz6at39lv2mpm7mldecz280afz8rm5xs5egfodc8faax216c8wnuhllbosz8ky3gnkey58kua3pcl19xwudmqu2km49x9lfnbjww64etyumg1l97jrx38ldtikb9uph4sxzhi9lk6p0sxr3a2mmlm1g9qyn1gzkz47uotju6812ilg15s4xnk0v5g06pcosq06pp9sdu6vgd6vh1fxww4vokqvrsa54brzsonn75utgcucit9v8kl5utcxpsbd8gt78sd7hlc1e1q79torcpgvvy7fgoi74d2km1iubcas1jm6c51gav1qr10lfw817zeqyxpm01ph7n35uwxml8z1uz9cuv3y8y5e1jxze7iy5qbuoexa6ilc97muh047bj0345dedtbf0fmvk85ul8z0w1mk64k188xdfp54t0a5c2pkabl4l653wc2crgvzbo61gdxv14gjs392ztiyp9k7pd6fnjryr64kno6p9eygs2klzs64p9bl27ppb3qxisr9ubf4q4x25rq1qydcf551t6oqjihcsoygxeiaj2xksblkzz3dakhaqvy8oxbn2sko8 == 
\r\e\p\0\e\6\i\j\q\a\r\r\e\h\r\k\4\2\h\2\i\8\r\r\o\j\d\u\c\r\g\w\e\p\m\u\7\t\b\v\3\6\d\6\d\a\c\j\7\n\z\4\a\g\1\2\1\0\j\4\m\1\i\o\n\a\u\h\s\l\f\p\r\9\t\5\n\i\8\t\b\w\0\p\k\m\1\b\q\6\o\x\j\9\b\2\f\x\j\r\t\t\u\6\n\2\8\p\1\w\3\8\n\y\v\o\v\e\g\6\p\g\y\n\a\u\3\a\h\8\a\4\f\h\k\4\r\7\b\y\e\1\3\y\1\z\6\h\t\l\h\7\u\z\q\n\z\d\9\t\q\9\5\d\u\h\9\g\9\6\8\1\9\n\4\t\i\x\y\t\v\1\o\z\f\9\c\4\l\7\n\b\v\o\s\l\0\l\2\e\5\7\5\5\m\9\2\y\g\g\r\x\p\e\a\m\6\4\i\f\r\b\b\z\3\d\q\a\q\4\5\z\o\w\h\z\d\f\o\d\f\1\d\h\b\i\s\w\q\5\2\l\d\n\d\b\7\a\3\5\8\b\h\8\q\1\y\l\9\7\u\r\a\g\9\9\2\4\3\5\4\p\6\3\7\y\i\s\m\1\w\m\d\4\2\9\a\x\9\b\8\v\m\j\y\c\x\s\q\9\e\2\7\s\g\1\y\x\r\n\o\k\2\1\b\l\5\t\a\q\r\n\c\y\8\w\5\5\t\t\x\b\0\l\6\1\x\b\n\8\k\c\d\2\a\7\5\1\q\z\0\z\g\x\p\9\2\d\r\r\v\j\3\4\m\7\z\x\x\w\q\6\x\q\z\i\s\0\t\t\2\d\u\s\d\2\7\r\d\9\w\q\y\l\r\6\3\0\j\n\x\k\z\i\5\8\x\k\k\e\q\g\h\j\9\f\1\n\g\h\f\8\k\q\s\5\g\p\0\i\c\6\8\w\n\7\p\i\w\j\4\y\4\b\2\w\a\q\6\c\y\s\3\t\n\s\b\e\f\v\q\d\l\a\6\y\f\y\y\g\l\2\h\b\u\v\l\t\r\v\d\q\m\k\a\q\s\c\l\9\r\t\5\d\h\m\r\5\x\e\3\n\o\s\y\v\6\2\n\8\g\0\i\q\l\n\v\o\8\j\w\6\y\l\g\l\8\p\v\n\w\o\m\9\6\v\3\o\d\3\3\n\b\3\r\a\z\6\c\c\7\k\o\k\c\3\e\x\l\e\l\i\z\l\s\9\6\r\y\8\6\q\w\0\c\9\u\n\w\z\5\w\n\8\h\8\t\g\6\b\9\j\e\q\2\d\t\f\w\7\f\5\b\v\c\s\a\0\y\j\d\t\3\e\q\9\t\n\3\1\n\t\b\z\v\r\3\5\l\8\7\k\0\0\i\6\m\5\m\t\r\c\x\7\0\u\5\v\w\1\0\a\n\e\i\d\u\7\4\b\h\b\9\i\x\z\h\g\e\o\d\3\8\s\m\7\s\9\h\h\j\4\d\n\c\v\9\9\6\j\m\k\f\q\8\e\8\m\z\c\b\y\8\g\k\o\r\l\5\u\t\j\1\5\s\m\s\p\h\t\t\t\g\h\1\2\h\1\h\b\8\h\x\c\r\d\t\y\8\o\4\v\i\t\f\9\b\6\i\g\7\k\c\h\t\d\v\9\n\u\r\k\0\u\1\j\p\i\j\g\n\f\1\z\3\9\x\k\0\6\5\r\x\h\x\r\k\y\8\c\i\7\l\a\o\b\7\z\e\9\d\t\v\9\6\h\9\e\8\f\d\e\h\l\1\o\r\i\8\d\y\2\j\c\u\v\o\3\h\w\d\r\e\n\z\2\7\0\o\c\o\u\l\3\5\e\d\p\c\a\x\s\x\8\m\o\e\o\i\k\6\w\c\2\c\w\w\b\h\v\6\q\v\n\1\x\h\m\z\8\g\k\t\p\7\q\o\t\u\s\q\b\w\8\w\a\4\x\z\h\d\j\2\q\w\0\w\p\q\p\h\q\4\u\l\t\6\g\1\b\j\2\r\r\b\y\4\y\s\j\i\0\h\6\z\f\e\8\i\t\6\p\w\h\g\f\9\a\c\v\f\s\t\d\6\v\k\p\e\v\c\e\m\9\e\a\c\u\y\1\m\p\m\l\t\w\d\j\j\p\a\p\o\q\b\j\j\y\3\t\5\x\5\h\8\5\y\o\4\z\1\d\r\n\8\5\d\l\s\g\5\r\t\l\2\o\j\n\i\t\1\0\b\h\t\2\l\o\5\8\h\4\s\n\1\6\o\l\a\j\8\r\a\3\y\3\5\o\3\j\u\d\x\v\t\4\m\f\y\7\d\r\v\5\q\4\7\d\o\g\1\u\9\s\0\z\v\j\t\z\o\b\6\9\8\o\p\l\w\s\2\1\d\e\x\u\9\9\5\h\q\a\p\f\8\g\l\g\i\8\m\8\4\a\t\p\i\j\4\5\y\q\i\m\0\r\1\k\4\x\l\8\j\x\q\q\1\q\w\o\w\b\0\l\t\1\g\w\8\5\u\x\u\e\4\t\2\2\o\f\r\v\n\9\y\4\r\l\0\t\4\p\p\1\w\d\t\1\2\e\t\y\q\x\n\m\o\m\1\k\f\e\s\u\4\f\e\m\h\3\k\1\o\b\8\q\c\u\5\d\c\p\a\8\e\u\r\i\8\5\t\2\t\5\u\0\6\q\r\o\f\c\b\f\p\m\q\w\2\7\8\d\n\z\2\4\a\n\k\d\h\7\5\6\8\w\8\w\2\a\4\t\x\7\h\b\d\0\6\k\5\5\2\g\a\b\1\6\y\x\n\t\q\7\6\y\v\d\p\h\1\l\p\w\i\y\h\7\x\w\i\g\4\h\b\j\m\9\l\m\o\g\j\n\r\j\d\w\i\4\s\3\1\k\q\c\y\0\8\f\g\s\d\z\n\j\j\v\o\i\z\n\t\r\c\y\2\u\3\g\p\e\8\9\p\j\8\c\s\o\7\v\f\3\6\m\p\n\3\q\j\6\u\c\m\m\f\a\s\y\x\s\o\1\u\b\t\n\k\q\w\5\v\1\w\w\f\b\g\r\w\k\5\b\2\2\v\t\8\o\0\6\f\n\z\e\x\u\f\6\z\8\o\g\5\5\d\0\o\j\q\v\n\3\p\t\g\k\y\9\n\j\2\f\z\j\m\n\1\h\m\s\0\z\y\6\g\4\k\s\a\p\z\u\2\9\p\i\5\d\s\l\c\2\7\y\7\7\y\e\v\l\k\9\3\o\w\v\z\v\a\j\6\4\3\x\u\g\u\a\r\m\v\v\a\1\p\p\g\m\r\c\e\8\2\p\g\x\g\y\8\u\q\f\e\p\a\r\w\6\7\8\b\s\a\u\b\n\o\o\k\b\j\9\6\r\f\n\s\8\5\v\a\b\q\b\6\r\f\f\0\g\k\p\a\p\6\w\d\b\r\3\e\y\2\f\i\x\u\t\v\3\8\l\p\o\q\1\9\6\d\m\3\b\y\5\4\x\o\w\d\d\x\8\w\t\z\c\k\v\t\x\o\k\z\2\1\9\8\3\m\e\t\g\f\h\s\g\5\7\e\2\v\o\0\v\6\4\7\q\o\d\g\g\v\r\a\t\8\2\1\e\7\y\i\4\j\4\l\4\n\w\x\l\q\k\6\8\q\s\e\q\l\n\j\f\r\i\6\e\9\v\o\a\1\e\x\j\u\b\b\u\3\w\a\i\p\2\c\6\n\3\2\q\4\1\3\x\1\l\v\z\c\z\2\r\t\1\4\q\j\a\3\0\d\m\l\9\j\j\t\q\5\e\h\u\c\o\f\b\y\0\e\c\w\d\c\q\g\i\6\6\6\v\u\p\f\n\n\b\2\b\z\6\k\k\e\
o\r\t\r\d\8\i\8\l\t\r\q\6\3\l\b\g\v\n\7\j\s\5\q\y\t\w\9\h\g\m\h\o\w\y\0\m\2\7\p\k\6\i\g\g\o\s\q\w\c\k\x\0\j\n\d\f\c\x\x\y\f\k\s\k\l\4\x\n\q\c\a\q\h\5\l\3\j\8\e\j\0\g\k\f\u\m\b\s\u\i\c\j\v\8\b\8\1\y\q\4\0\f\7\3\y\q\b\0\s\3\c\d\w\x\6\s\6\8\g\x\n\c\2\b\7\c\l\6\o\c\r\9\l\a\h\y\a\p\w\r\u\8\p\7\y\r\q\x\1\f\p\u\w\a\a\q\d\l\j\y\b\m\u\4\m\w\l\v\p\l\d\4\r\j\c\9\8\d\7\p\m\o\7\j\e\9\b\w\b\c\s\j\4\j\3\d\p\k\0\9\m\9\3\y\a\d\c\4\1\0\u\n\c\s\b\5\q\b\s\t\y\f\u\c\n\i\8\f\8\u\z\s\u\e\b\8\u\p\j\z\b\w\w\9\d\r\g\6\v\5\1\d\9\1\5\7\0\e\k\w\0\m\x\9\w\6\q\j\s\t\0\z\3\p\5\g\v\g\5\i\i\q\f\z\p\b\8\8\m\9\t\h\7\j\7\j\1\v\t\i\5\s\a\s\r\2\n\c\1\3\u\q\r\9\h\f\m\5\6\f\c\d\z\l\5\w\8\2\b\v\6\y\m\q\m\a\t\7\h\o\u\g\k\o\9\8\0\r\7\y\i\x\f\s\5\m\x\h\n\6\o\1\1\a\l\m\5\x\y\d\n\a\c\9\h\j\t\3\e\i\s\p\j\q\4\a\v\0\u\5\6\9\g\1\6\y\f\o\0\h\a\j\l\9\x\8\7\y\2\3\d\l\7\s\b\g\a\v\i\q\e\f\z\n\q\a\r\b\u\y\6\7\1\z\7\n\5\u\o\j\d\v\f\2\f\3\b\k\0\5\u\7\z\b\g\5\h\j\3\6\r\h\4\z\u\8\w\l\z\1\w\o\e\3\0\d\f\o\8\v\l\s\j\d\h\7\i\b\n\5\0\s\j\6\t\c\h\c\p\h\l\0\o\c\x\r\x\8\h\h\4\n\h\h\e\x\6\u\o\k\k\n\2\t\p\q\o\i\t\e\8\i\9\8\n\1\v\d\a\a\9\b\s\1\u\s\o\8\g\k\p\z\p\g\4\n\e\s\z\1\1\f\7\p\s\l\m\v\h\s\5\g\9\0\6\7\4\3\1\p\l\8\m\4\8\5\5\d\t\8\i\g\y\e\x\g\p\q\4\e\k\f\8\5\w\4\9\c\5\v\d\z\a\j\0\x\j\q\c\o\s\l\a\r\y\1\c\0\w\v\b\4\k\g\f\5\1\a\z\c\x\u\n\2\6\7\1\j\8\a\p\3\i\g\l\t\4\v\a\w\m\5\y\f\0\r\z\p\p\k\x\6\1\1\x\g\h\9\d\h\9\w\3\h\b\w\w\x\f\v\d\s\7\9\b\l\y\9\2\d\2\1\e\d\6\a\0\e\k\z\2\c\4\2\8\c\n\6\0\x\z\f\j\7\6\p\a\q\k\4\s\2\o\j\4\k\c\1\9\6\2\y\5\p\w\l\d\n\b\q\3\f\n\1\e\2\k\c\b\7\5\b\4\r\3\1\j\4\a\i\l\k\j\z\y\f\0\f\0\1\x\e\8\7\6\5\9\1\l\d\u\l\p\z\o\m\i\y\r\e\h\o\7\6\s\3\w\0\q\n\a\t\j\x\w\z\z\y\v\g\8\f\f\o\w\r\k\9\h\o\c\n\1\h\e\e\6\r\8\b\3\0\n\z\2\h\b\y\c\6\d\0\9\1\i\i\8\x\x\3\5\8\0\9\a\w\r\z\q\b\5\h\9\s\1\7\2\e\w\d\2\8\b\p\z\r\y\0\t\o\e\o\i\w\b\h\9\z\3\7\y\k\o\m\e\e\j\f\0\a\t\z\d\7\w\n\s\w\u\6\x\s\3\b\2\k\a\u\n\e\j\r\h\7\t\g\n\j\a\e\3\c\z\1\j\n\4\g\e\n\v\6\s\k\m\z\8\l\i\v\b\2\t\3\d\f\h\3\t\3\i\q\3\2\h\x\1\i\f\p\5\m\y\x\a\g\y\w\5\5\x\m\j\e\i\l\e\4\c\6\a\x\s\j\0\m\a\7\a\9\0\x\z\f\i\k\u\x\u\t\g\t\9\s\z\7\c\4\s\2\n\5\6\8\6\s\e\8\a\d\p\f\1\q\z\m\m\l\g\h\w\l\c\w\z\i\l\z\i\p\n\z\s\e\5\t\l\e\l\i\7\v\y\4\h\t\e\u\o\r\9\m\g\4\m\0\u\z\t\t\l\f\9\i\q\7\g\f\d\o\s\4\3\1\r\p\2\3\3\m\3\g\l\z\n\2\k\v\l\6\4\v\6\7\7\r\x\8\9\5\o\9\s\s\j\i\z\i\p\u\6\a\l\3\5\m\u\1\j\v\t\e\5\v\a\3\a\u\6\q\e\w\4\h\k\0\2\z\4\t\d\r\l\e\k\4\a\s\j\w\4\t\8\l\7\d\1\b\j\e\t\h\y\6\w\v\3\v\3\b\a\e\w\x\h\z\4\e\w\m\c\m\0\2\x\3\7\q\4\j\v\o\d\2\i\p\l\o\t\h\9\1\b\l\z\3\i\0\u\8\n\i\4\b\2\v\1\f\b\4\v\s\j\f\q\9\y\c\8\d\b\o\x\b\k\8\i\2\g\d\d\x\f\4\7\h\p\i\s\1\q\6\3\0\w\2\i\6\l\g\y\e\1\n\x\x\j\s\s\r\e\4\g\t\5\f\d\v\x\x\o\0\b\4\f\q\d\d\a\1\2\l\r\n\o\r\1\h\o\l\7\e\9\m\p\c\t\m\1\f\i\5\n\c\n\z\h\6\1\n\1\s\s\9\c\a\6\9\j\y\t\1\8\d\b\z\3\c\5\3\n\p\f\d\1\t\l\j\n\4\a\a\v\b\v\g\x\d\y\q\o\5\0\s\a\3\v\2\k\j\a\f\d\k\e\x\8\f\m\2\u\0\w\o\8\y\l\q\8\k\n\6\a\t\h\t\5\w\8\y\z\z\u\g\z\f\o\i\i\x\l\a\m\9\7\p\4\w\b\n\x\8\5\d\j\6\z\x\a\c\a\1\u\2\9\o\v\z\2\y\4\b\4\x\9\3\7\s\l\0\k\j\z\u\y\f\4\h\u\u\x\y\f\7\k\n\f\6\p\y\s\4\8\p\d\1\a\z\u\s\s\8\y\0\0\2\0\7\e\5\d\4\m\b\6\g\u\f\4\y\0\0\y\4\a\z\j\w\2\8\c\4\t\4\g\p\f\g\b\j\2\w\o\n\y\4\o\7\9\n\i\a\8\t\c\s\g\s\8\g\9\w\s\8\q\z\w\2\l\c\o\x\z\6\w\6\9\d\2\6\q\1\8\m\a\y\j\k\t\h\v\4\9\1\e\t\a\u\p\d\3\n\l\m\0\0\1\a\8\3\0\w\c\1\d\9\9\t\w\t\w\e\t\o\z\c\f\r\a\g\e\r\l\u\0\j\m\z\y\l\v\2\z\5\p\d\1\h\k\z\2\d\v\s\w\y\c\p\r\7\f\1\m\j\c\o\h\v\8\m\y\o\3\t\o\1\r\b\5\j\t\g\o\j\v\g\1\1\h\s\6\y\b\r\k\i\q\l\l\e\f\2\5\7\7\9\x\3\p\y\d\q\b\1\s\v\f\7\n\3\o\e\k\a\6\9\7\0\b\x\z\6\a\t\3\9\l\v\2\m\p\m\7\m\l\d\e\c\z\2\8\0\a\f\z\8\r\m\5\x\s\5\e\g\f\o\d
\c\8\f\a\a\x\2\1\6\c\8\w\n\u\h\l\l\b\o\s\z\8\k\y\3\g\n\k\e\y\5\8\k\u\a\3\p\c\l\1\9\x\w\u\d\m\q\u\2\k\m\4\9\x\9\l\f\n\b\j\w\w\6\4\e\t\y\u\m\g\1\l\9\7\j\r\x\3\8\l\d\t\i\k\b\9\u\p\h\4\s\x\z\h\i\9\l\k\6\p\0\s\x\r\3\a\2\m\m\l\m\1\g\9\q\y\n\1\g\z\k\z\4\7\u\o\t\j\u\6\8\1\2\i\l\g\1\5\s\4\x\n\k\0\v\5\g\0\6\p\c\o\s\q\0\6\p\p\9\s\d\u\6\v\g\d\6\v\h\1\f\x\w\w\4\v\o\k\q\v\r\s\a\5\4\b\r\z\s\o\n\n\7\5\u\t\g\c\u\c\i\t\9\v\8\k\l\5\u\t\c\x\p\s\b\d\8\g\t\7\8\s\d\7\h\l\c\1\e\1\q\7\9\t\o\r\c\p\g\v\v\y\7\f\g\o\i\7\4\d\2\k\m\1\i\u\b\c\a\s\1\j\m\6\c\5\1\g\a\v\1\q\r\1\0\l\f\w\8\1\7\z\e\q\y\x\p\m\0\1\p\h\7\n\3\5\u\w\x\m\l\8\z\1\u\z\9\c\u\v\3\y\8\y\5\e\1\j\x\z\e\7\i\y\5\q\b\u\o\e\x\a\6\i\l\c\9\7\m\u\h\0\4\7\b\j\0\3\4\5\d\e\d\t\b\f\0\f\m\v\k\8\5\u\l\8\z\0\w\1\m\k\6\4\k\1\8\8\x\d\f\p\5\4\t\0\a\5\c\2\p\k\a\b\l\4\l\6\5\3\w\c\2\c\r\g\v\z\b\o\6\1\g\d\x\v\1\4\g\j\s\3\9\2\z\t\i\y\p\9\k\7\p\d\6\f\n\j\r\y\r\6\4\k\n\o\6\p\9\e\y\g\s\2\k\l\z\s\6\4\p\9\b\l\2\7\p\p\b\3\q\x\i\s\r\9\u\b\f\4\q\4\x\2\5\r\q\1\q\y\d\c\f\5\5\1\t\6\o\q\j\i\h\c\s\o\y\g\x\e\i\a\j\2\x\k\s\b\l\k\z\z\3\d\a\k\h\a\q\v\y\8\o\x\b\n\2\s\k\o\8 ]] 00:06:48.899 ************************************ 00:06:48.899 END TEST dd_rw_offset 00:06:48.899 ************************************ 00:06:48.899 00:06:48.899 real 0m1.426s 00:06:48.899 user 0m1.009s 00:06:48.899 sys 0m0.578s 00:06:48.899 19:45:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.899 19:45:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:48.899 19:45:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:48.899 19:45:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:48.899 19:45:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:48.899 19:45:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:48.899 19:45:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:48.899 19:45:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:48.899 19:45:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:48.899 19:45:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:48.899 19:45:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:48.899 19:45:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:48.899 19:45:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:48.899 [2024-07-24 19:45:17.402127] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:48.899 [2024-07-24 19:45:17.402226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61972 ] 00:06:48.899 { 00:06:48.899 "subsystems": [ 00:06:48.899 { 00:06:48.899 "subsystem": "bdev", 00:06:48.899 "config": [ 00:06:48.899 { 00:06:48.899 "params": { 00:06:48.899 "trtype": "pcie", 00:06:48.899 "traddr": "0000:00:10.0", 00:06:48.899 "name": "Nvme0" 00:06:48.899 }, 00:06:48.899 "method": "bdev_nvme_attach_controller" 00:06:48.899 }, 00:06:48.899 { 00:06:48.899 "method": "bdev_wait_for_examine" 00:06:48.899 } 00:06:48.899 ] 00:06:48.899 } 00:06:48.899 ] 00:06:48.899 } 00:06:48.899 [2024-07-24 19:45:17.539115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.157 [2024-07-24 19:45:17.625420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.157 [2024-07-24 19:45:17.678859] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:49.415  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:49.415 00:06:49.415 19:45:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:49.415 00:06:49.415 real 0m18.736s 00:06:49.415 user 0m13.576s 00:06:49.415 sys 0m6.827s 00:06:49.415 19:45:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.415 19:45:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:49.415 ************************************ 00:06:49.415 END TEST spdk_dd_basic_rw 00:06:49.415 ************************************ 00:06:49.415 19:45:18 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:49.415 19:45:18 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.415 19:45:18 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.415 19:45:18 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:49.415 ************************************ 00:06:49.415 START TEST spdk_dd_posix 00:06:49.415 ************************************ 00:06:49.415 19:45:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:49.674 * Looking for test storage... 
00:06:49.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:49.674 * First test run, liburing in use 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:49.674 ************************************ 00:06:49.674 START TEST dd_flag_append 00:06:49.674 ************************************ 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=3mmmrtedfdunt81kk9ueso9ni3s3rb1l 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=9wtt712sz143r5uopxhaloolejrf2ita 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 3mmmrtedfdunt81kk9ueso9ni3s3rb1l 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 9wtt712sz143r5uopxhaloolejrf2ita 00:06:49.674 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:49.674 [2024-07-24 19:45:18.184688] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:49.674 [2024-07-24 19:45:18.184799] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62032 ] 00:06:49.674 [2024-07-24 19:45:18.317068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.932 [2024-07-24 19:45:18.428157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.932 [2024-07-24 19:45:18.481115] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.190  Copying: 32/32 [B] (average 31 kBps) 00:06:50.190 00:06:50.190 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 9wtt712sz143r5uopxhaloolejrf2ita3mmmrtedfdunt81kk9ueso9ni3s3rb1l == \9\w\t\t\7\1\2\s\z\1\4\3\r\5\u\o\p\x\h\a\l\o\o\l\e\j\r\f\2\i\t\a\3\m\m\m\r\t\e\d\f\d\u\n\t\8\1\k\k\9\u\e\s\o\9\n\i\3\s\3\r\b\1\l ]] 00:06:50.190 00:06:50.190 real 0m0.572s 00:06:50.190 user 0m0.325s 00:06:50.190 sys 0m0.256s 00:06:50.190 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.190 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:50.190 ************************************ 00:06:50.190 END TEST dd_flag_append 00:06:50.190 ************************************ 00:06:50.190 19:45:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:50.190 19:45:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:50.190 19:45:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.190 19:45:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:50.190 ************************************ 00:06:50.190 START TEST dd_flag_directory 00:06:50.190 ************************************ 00:06:50.190 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:06:50.190 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:50.190 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:50.190 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:50.190 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.190 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.190 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.190 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.190 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.190 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 
-- # case "$(type -t "$arg")" in 00:06:50.190 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.190 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:50.190 19:45:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:50.190 [2024-07-24 19:45:18.808602] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:50.190 [2024-07-24 19:45:18.808694] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62060 ] 00:06:50.448 [2024-07-24 19:45:18.945224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.448 [2024-07-24 19:45:19.052314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.448 [2024-07-24 19:45:19.105297] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.705 [2024-07-24 19:45:19.137508] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:50.705 [2024-07-24 19:45:19.137586] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:50.705 [2024-07-24 19:45:19.137601] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:50.705 [2024-07-24 19:45:19.247304] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:50.705 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:50.705 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.705 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:50.705 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:50.705 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:50.705 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.705 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:50.705 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:50.705 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:50.705 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.705 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.705 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.705 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.705 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.705 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.706 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:50.706 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:50.706 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:50.962 [2024-07-24 19:45:19.395290] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:50.962 [2024-07-24 19:45:19.395385] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62075 ] 00:06:50.962 [2024-07-24 19:45:19.533176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.219 [2024-07-24 19:45:19.639278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.219 [2024-07-24 19:45:19.692075] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:51.219 [2024-07-24 19:45:19.724613] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:51.219 [2024-07-24 19:45:19.724669] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:51.219 [2024-07-24 19:45:19.724684] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:51.219 [2024-07-24 19:45:19.833473] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:51.477 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:51.477 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:51.477 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:51.477 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:51.477 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:51.477 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:51.477 00:06:51.477 real 0m1.173s 00:06:51.477 user 0m0.684s 00:06:51.477 sys 0m0.278s 00:06:51.477 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.477 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:51.477 ************************************ 00:06:51.477 END TEST dd_flag_directory 00:06:51.477 ************************************ 00:06:51.478 19:45:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test 
dd_flag_nofollow nofollow 00:06:51.478 19:45:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.478 19:45:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.478 19:45:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:51.478 ************************************ 00:06:51.478 START TEST dd_flag_nofollow 00:06:51.478 ************************************ 00:06:51.478 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:06:51.478 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:51.478 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:51.478 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:51.478 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:51.478 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:51.478 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:51.478 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:51.478 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.478 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.478 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.478 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.478 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.478 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.478 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.478 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:51.478 19:45:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:51.478 [2024-07-24 19:45:20.030656] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:51.478 [2024-07-24 19:45:20.030786] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62104 ] 00:06:51.735 [2024-07-24 19:45:20.170296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.735 [2024-07-24 19:45:20.276908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.735 [2024-07-24 19:45:20.330822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:51.735 [2024-07-24 19:45:20.365011] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:51.735 [2024-07-24 19:45:20.365088] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:51.735 [2024-07-24 19:45:20.365121] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:51.997 [2024-07-24 19:45:20.478841] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:51.997 19:45:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:51.997 19:45:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:51.997 19:45:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:51.997 19:45:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:51.997 19:45:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:51.997 19:45:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:51.997 19:45:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:51.997 19:45:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:51.997 19:45:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:51.997 19:45:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.997 19:45:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.997 19:45:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.997 19:45:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.997 19:45:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.997 19:45:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.997 19:45:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.997 19:45:20 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:51.997 19:45:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:51.997 [2024-07-24 19:45:20.630989] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:51.997 [2024-07-24 19:45:20.631094] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62113 ] 00:06:52.255 [2024-07-24 19:45:20.766144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.255 [2024-07-24 19:45:20.871527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.512 [2024-07-24 19:45:20.924912] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:52.512 [2024-07-24 19:45:20.959164] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:52.512 [2024-07-24 19:45:20.959207] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:52.512 [2024-07-24 19:45:20.959223] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.512 [2024-07-24 19:45:21.075590] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:52.512 19:45:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:52.512 19:45:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.512 19:45:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:52.512 19:45:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:52.512 19:45:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:52.512 19:45:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.512 19:45:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:52.512 19:45:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:52.512 19:45:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:52.769 19:45:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:52.769 [2024-07-24 19:45:21.230012] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:52.769 [2024-07-24 19:45:21.230096] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62121 ] 00:06:52.769 [2024-07-24 19:45:21.360425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.027 [2024-07-24 19:45:21.496863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.027 [2024-07-24 19:45:21.550197] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:53.284  Copying: 512/512 [B] (average 500 kBps) 00:06:53.284 00:06:53.284 19:45:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ i3zkdf1rvgwu1mm00c3azdf22c4izepia0zkr4ldpg82nl2phqw8q4gomq4tabqjc1bsbbptcc46mu3blvx4jng3gj56u78ttbue8x1xeplnfnq0ovfu3gv303tyvsel4vh9gh8feyqag7ddsuxk3c160po7if0tz3gqeswfeewkxtn5osgd3ggy00pyg0kzyr8uavpemke7guik4ef7uiiyiddxqhp1hzvfa8esywt2v09zyujhrtnd49buvo7z4jyput5mwzfvi1x2n3xqiyyqx6gk67ydpp4plggkbkn41wk2ucgniympmdqeze6g62xcw9ici2iej4ndadtb4nn2kn338p4ss1s4bdu6b82ooxbztcedaw5qa3b3bpz1kw5eyo24cqhhw9qe7kzj0vtcb64igf5i5ududyvs5rmb0hc86zjysa3w5uzvmzpa21gg9ri7u2y5w6rwr8syzk8p06bld2z10yxec8dju9dfsaju6mn07lvu63jhzt3d == \i\3\z\k\d\f\1\r\v\g\w\u\1\m\m\0\0\c\3\a\z\d\f\2\2\c\4\i\z\e\p\i\a\0\z\k\r\4\l\d\p\g\8\2\n\l\2\p\h\q\w\8\q\4\g\o\m\q\4\t\a\b\q\j\c\1\b\s\b\b\p\t\c\c\4\6\m\u\3\b\l\v\x\4\j\n\g\3\g\j\5\6\u\7\8\t\t\b\u\e\8\x\1\x\e\p\l\n\f\n\q\0\o\v\f\u\3\g\v\3\0\3\t\y\v\s\e\l\4\v\h\9\g\h\8\f\e\y\q\a\g\7\d\d\s\u\x\k\3\c\1\6\0\p\o\7\i\f\0\t\z\3\g\q\e\s\w\f\e\e\w\k\x\t\n\5\o\s\g\d\3\g\g\y\0\0\p\y\g\0\k\z\y\r\8\u\a\v\p\e\m\k\e\7\g\u\i\k\4\e\f\7\u\i\i\y\i\d\d\x\q\h\p\1\h\z\v\f\a\8\e\s\y\w\t\2\v\0\9\z\y\u\j\h\r\t\n\d\4\9\b\u\v\o\7\z\4\j\y\p\u\t\5\m\w\z\f\v\i\1\x\2\n\3\x\q\i\y\y\q\x\6\g\k\6\7\y\d\p\p\4\p\l\g\g\k\b\k\n\4\1\w\k\2\u\c\g\n\i\y\m\p\m\d\q\e\z\e\6\g\6\2\x\c\w\9\i\c\i\2\i\e\j\4\n\d\a\d\t\b\4\n\n\2\k\n\3\3\8\p\4\s\s\1\s\4\b\d\u\6\b\8\2\o\o\x\b\z\t\c\e\d\a\w\5\q\a\3\b\3\b\p\z\1\k\w\5\e\y\o\2\4\c\q\h\h\w\9\q\e\7\k\z\j\0\v\t\c\b\6\4\i\g\f\5\i\5\u\d\u\d\y\v\s\5\r\m\b\0\h\c\8\6\z\j\y\s\a\3\w\5\u\z\v\m\z\p\a\2\1\g\g\9\r\i\7\u\2\y\5\w\6\r\w\r\8\s\y\z\k\8\p\0\6\b\l\d\2\z\1\0\y\x\e\c\8\d\j\u\9\d\f\s\a\j\u\6\m\n\0\7\l\v\u\6\3\j\h\z\t\3\d ]] 00:06:53.284 00:06:53.284 real 0m1.832s 00:06:53.284 user 0m1.052s 00:06:53.284 sys 0m0.585s 00:06:53.284 19:45:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.284 19:45:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:53.284 ************************************ 00:06:53.284 END TEST dd_flag_nofollow 00:06:53.284 ************************************ 00:06:53.284 19:45:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:53.284 19:45:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.284 19:45:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.284 19:45:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:53.284 ************************************ 00:06:53.284 START TEST dd_flag_noatime 00:06:53.284 ************************************ 00:06:53.284 19:45:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:06:53.284 19:45:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:06:53.284 19:45:21 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:53.284 19:45:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:53.284 19:45:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:53.284 19:45:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:53.284 19:45:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:53.285 19:45:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721850321 00:06:53.285 19:45:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:53.285 19:45:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721850321 00:06:53.285 19:45:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:54.217 19:45:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:54.475 [2024-07-24 19:45:22.928024] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:06:54.475 [2024-07-24 19:45:22.928155] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62165 ] 00:06:54.475 [2024-07-24 19:45:23.070791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.734 [2024-07-24 19:45:23.209045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.734 [2024-07-24 19:45:23.269057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:54.993  Copying: 512/512 [B] (average 500 kBps) 00:06:54.993 00:06:54.993 19:45:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:54.993 19:45:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721850321 )) 00:06:54.993 19:45:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:54.993 19:45:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721850321 )) 00:06:54.993 19:45:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:54.993 [2024-07-24 19:45:23.592027] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:54.993 [2024-07-24 19:45:23.592147] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62184 ] 00:06:55.251 [2024-07-24 19:45:23.732003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.251 [2024-07-24 19:45:23.870753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.508 [2024-07-24 19:45:23.929183] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:55.767  Copying: 512/512 [B] (average 500 kBps) 00:06:55.767 00:06:55.767 19:45:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:55.767 19:45:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721850323 )) 00:06:55.767 00:06:55.767 real 0m2.342s 00:06:55.767 user 0m0.788s 00:06:55.767 sys 0m0.597s 00:06:55.767 19:45:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.767 19:45:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:55.767 ************************************ 00:06:55.767 END TEST dd_flag_noatime 00:06:55.767 ************************************ 00:06:55.767 19:45:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:55.767 19:45:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.767 19:45:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.767 19:45:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:55.767 ************************************ 00:06:55.767 START TEST dd_flags_misc 00:06:55.767 ************************************ 00:06:55.767 19:45:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:06:55.767 19:45:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:55.767 19:45:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:55.767 19:45:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:55.767 19:45:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:55.767 19:45:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:55.767 19:45:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:55.767 19:45:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:55.767 19:45:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:55.767 19:45:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:55.767 [2024-07-24 19:45:24.308626] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:55.767 [2024-07-24 19:45:24.308767] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62213 ] 00:06:56.027 [2024-07-24 19:45:24.446917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.027 [2024-07-24 19:45:24.561373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.027 [2024-07-24 19:45:24.614669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:56.285  Copying: 512/512 [B] (average 500 kBps) 00:06:56.285 00:06:56.285 19:45:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ toecte2mnm5zr3v6eg2oup6vpqwrb3x1o2w279qqxqnr4486j49o4gmqx8d8yzvtdg9s6nvhpgwujme09o7vvs7ciw10w32jgzqdpiemnzvubdruh73omyu730zx81l7ir9v7sz754ue2zap2ajw9uh4jkn6pyl9pe0a8md19ywffe9bcpo8hkbnho987k7b3jyhkfvvuphq51aof87kx61jquqmc9yttpanoecpt5t6zjxkxs2uroic4t350iggeh8jm46533rvwzo5bryupmyfx194iynxaiz7z7hr0i03s5184nq7mldej0p30m31nbgwn9j1px1pnron20uxvij739oi8413pg4csj101pq03om73w6s7pg7v4l40or4eac4e6d24astzidu43hgu7ieqpx5bbvhrhcdde6cy68bf419awqrp39kskpazoio2syagtdktv4my1fpzskhbmy8jjllikbwdzln5t2ajvolni0e49krnkaiwj484h8o == \t\o\e\c\t\e\2\m\n\m\5\z\r\3\v\6\e\g\2\o\u\p\6\v\p\q\w\r\b\3\x\1\o\2\w\2\7\9\q\q\x\q\n\r\4\4\8\6\j\4\9\o\4\g\m\q\x\8\d\8\y\z\v\t\d\g\9\s\6\n\v\h\p\g\w\u\j\m\e\0\9\o\7\v\v\s\7\c\i\w\1\0\w\3\2\j\g\z\q\d\p\i\e\m\n\z\v\u\b\d\r\u\h\7\3\o\m\y\u\7\3\0\z\x\8\1\l\7\i\r\9\v\7\s\z\7\5\4\u\e\2\z\a\p\2\a\j\w\9\u\h\4\j\k\n\6\p\y\l\9\p\e\0\a\8\m\d\1\9\y\w\f\f\e\9\b\c\p\o\8\h\k\b\n\h\o\9\8\7\k\7\b\3\j\y\h\k\f\v\v\u\p\h\q\5\1\a\o\f\8\7\k\x\6\1\j\q\u\q\m\c\9\y\t\t\p\a\n\o\e\c\p\t\5\t\6\z\j\x\k\x\s\2\u\r\o\i\c\4\t\3\5\0\i\g\g\e\h\8\j\m\4\6\5\3\3\r\v\w\z\o\5\b\r\y\u\p\m\y\f\x\1\9\4\i\y\n\x\a\i\z\7\z\7\h\r\0\i\0\3\s\5\1\8\4\n\q\7\m\l\d\e\j\0\p\3\0\m\3\1\n\b\g\w\n\9\j\1\p\x\1\p\n\r\o\n\2\0\u\x\v\i\j\7\3\9\o\i\8\4\1\3\p\g\4\c\s\j\1\0\1\p\q\0\3\o\m\7\3\w\6\s\7\p\g\7\v\4\l\4\0\o\r\4\e\a\c\4\e\6\d\2\4\a\s\t\z\i\d\u\4\3\h\g\u\7\i\e\q\p\x\5\b\b\v\h\r\h\c\d\d\e\6\c\y\6\8\b\f\4\1\9\a\w\q\r\p\3\9\k\s\k\p\a\z\o\i\o\2\s\y\a\g\t\d\k\t\v\4\m\y\1\f\p\z\s\k\h\b\m\y\8\j\j\l\l\i\k\b\w\d\z\l\n\5\t\2\a\j\v\o\l\n\i\0\e\4\9\k\r\n\k\a\i\w\j\4\8\4\h\8\o ]] 00:06:56.285 19:45:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:56.285 19:45:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:56.285 [2024-07-24 19:45:24.905995] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:56.285 [2024-07-24 19:45:24.906116] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62222 ] 00:06:56.544 [2024-07-24 19:45:25.041612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.544 [2024-07-24 19:45:25.159794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.802 [2024-07-24 19:45:25.216474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:57.060  Copying: 512/512 [B] (average 500 kBps) 00:06:57.060 00:06:57.060 19:45:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ toecte2mnm5zr3v6eg2oup6vpqwrb3x1o2w279qqxqnr4486j49o4gmqx8d8yzvtdg9s6nvhpgwujme09o7vvs7ciw10w32jgzqdpiemnzvubdruh73omyu730zx81l7ir9v7sz754ue2zap2ajw9uh4jkn6pyl9pe0a8md19ywffe9bcpo8hkbnho987k7b3jyhkfvvuphq51aof87kx61jquqmc9yttpanoecpt5t6zjxkxs2uroic4t350iggeh8jm46533rvwzo5bryupmyfx194iynxaiz7z7hr0i03s5184nq7mldej0p30m31nbgwn9j1px1pnron20uxvij739oi8413pg4csj101pq03om73w6s7pg7v4l40or4eac4e6d24astzidu43hgu7ieqpx5bbvhrhcdde6cy68bf419awqrp39kskpazoio2syagtdktv4my1fpzskhbmy8jjllikbwdzln5t2ajvolni0e49krnkaiwj484h8o == \t\o\e\c\t\e\2\m\n\m\5\z\r\3\v\6\e\g\2\o\u\p\6\v\p\q\w\r\b\3\x\1\o\2\w\2\7\9\q\q\x\q\n\r\4\4\8\6\j\4\9\o\4\g\m\q\x\8\d\8\y\z\v\t\d\g\9\s\6\n\v\h\p\g\w\u\j\m\e\0\9\o\7\v\v\s\7\c\i\w\1\0\w\3\2\j\g\z\q\d\p\i\e\m\n\z\v\u\b\d\r\u\h\7\3\o\m\y\u\7\3\0\z\x\8\1\l\7\i\r\9\v\7\s\z\7\5\4\u\e\2\z\a\p\2\a\j\w\9\u\h\4\j\k\n\6\p\y\l\9\p\e\0\a\8\m\d\1\9\y\w\f\f\e\9\b\c\p\o\8\h\k\b\n\h\o\9\8\7\k\7\b\3\j\y\h\k\f\v\v\u\p\h\q\5\1\a\o\f\8\7\k\x\6\1\j\q\u\q\m\c\9\y\t\t\p\a\n\o\e\c\p\t\5\t\6\z\j\x\k\x\s\2\u\r\o\i\c\4\t\3\5\0\i\g\g\e\h\8\j\m\4\6\5\3\3\r\v\w\z\o\5\b\r\y\u\p\m\y\f\x\1\9\4\i\y\n\x\a\i\z\7\z\7\h\r\0\i\0\3\s\5\1\8\4\n\q\7\m\l\d\e\j\0\p\3\0\m\3\1\n\b\g\w\n\9\j\1\p\x\1\p\n\r\o\n\2\0\u\x\v\i\j\7\3\9\o\i\8\4\1\3\p\g\4\c\s\j\1\0\1\p\q\0\3\o\m\7\3\w\6\s\7\p\g\7\v\4\l\4\0\o\r\4\e\a\c\4\e\6\d\2\4\a\s\t\z\i\d\u\4\3\h\g\u\7\i\e\q\p\x\5\b\b\v\h\r\h\c\d\d\e\6\c\y\6\8\b\f\4\1\9\a\w\q\r\p\3\9\k\s\k\p\a\z\o\i\o\2\s\y\a\g\t\d\k\t\v\4\m\y\1\f\p\z\s\k\h\b\m\y\8\j\j\l\l\i\k\b\w\d\z\l\n\5\t\2\a\j\v\o\l\n\i\0\e\4\9\k\r\n\k\a\i\w\j\4\8\4\h\8\o ]] 00:06:57.060 19:45:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:57.060 19:45:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:57.060 [2024-07-24 19:45:25.537388] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:57.060 [2024-07-24 19:45:25.537484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62232 ] 00:06:57.060 [2024-07-24 19:45:25.671482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.318 [2024-07-24 19:45:25.788585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.318 [2024-07-24 19:45:25.843634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:57.577  Copying: 512/512 [B] (average 250 kBps) 00:06:57.577 00:06:57.577 19:45:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ toecte2mnm5zr3v6eg2oup6vpqwrb3x1o2w279qqxqnr4486j49o4gmqx8d8yzvtdg9s6nvhpgwujme09o7vvs7ciw10w32jgzqdpiemnzvubdruh73omyu730zx81l7ir9v7sz754ue2zap2ajw9uh4jkn6pyl9pe0a8md19ywffe9bcpo8hkbnho987k7b3jyhkfvvuphq51aof87kx61jquqmc9yttpanoecpt5t6zjxkxs2uroic4t350iggeh8jm46533rvwzo5bryupmyfx194iynxaiz7z7hr0i03s5184nq7mldej0p30m31nbgwn9j1px1pnron20uxvij739oi8413pg4csj101pq03om73w6s7pg7v4l40or4eac4e6d24astzidu43hgu7ieqpx5bbvhrhcdde6cy68bf419awqrp39kskpazoio2syagtdktv4my1fpzskhbmy8jjllikbwdzln5t2ajvolni0e49krnkaiwj484h8o == \t\o\e\c\t\e\2\m\n\m\5\z\r\3\v\6\e\g\2\o\u\p\6\v\p\q\w\r\b\3\x\1\o\2\w\2\7\9\q\q\x\q\n\r\4\4\8\6\j\4\9\o\4\g\m\q\x\8\d\8\y\z\v\t\d\g\9\s\6\n\v\h\p\g\w\u\j\m\e\0\9\o\7\v\v\s\7\c\i\w\1\0\w\3\2\j\g\z\q\d\p\i\e\m\n\z\v\u\b\d\r\u\h\7\3\o\m\y\u\7\3\0\z\x\8\1\l\7\i\r\9\v\7\s\z\7\5\4\u\e\2\z\a\p\2\a\j\w\9\u\h\4\j\k\n\6\p\y\l\9\p\e\0\a\8\m\d\1\9\y\w\f\f\e\9\b\c\p\o\8\h\k\b\n\h\o\9\8\7\k\7\b\3\j\y\h\k\f\v\v\u\p\h\q\5\1\a\o\f\8\7\k\x\6\1\j\q\u\q\m\c\9\y\t\t\p\a\n\o\e\c\p\t\5\t\6\z\j\x\k\x\s\2\u\r\o\i\c\4\t\3\5\0\i\g\g\e\h\8\j\m\4\6\5\3\3\r\v\w\z\o\5\b\r\y\u\p\m\y\f\x\1\9\4\i\y\n\x\a\i\z\7\z\7\h\r\0\i\0\3\s\5\1\8\4\n\q\7\m\l\d\e\j\0\p\3\0\m\3\1\n\b\g\w\n\9\j\1\p\x\1\p\n\r\o\n\2\0\u\x\v\i\j\7\3\9\o\i\8\4\1\3\p\g\4\c\s\j\1\0\1\p\q\0\3\o\m\7\3\w\6\s\7\p\g\7\v\4\l\4\0\o\r\4\e\a\c\4\e\6\d\2\4\a\s\t\z\i\d\u\4\3\h\g\u\7\i\e\q\p\x\5\b\b\v\h\r\h\c\d\d\e\6\c\y\6\8\b\f\4\1\9\a\w\q\r\p\3\9\k\s\k\p\a\z\o\i\o\2\s\y\a\g\t\d\k\t\v\4\m\y\1\f\p\z\s\k\h\b\m\y\8\j\j\l\l\i\k\b\w\d\z\l\n\5\t\2\a\j\v\o\l\n\i\0\e\4\9\k\r\n\k\a\i\w\j\4\8\4\h\8\o ]] 00:06:57.577 19:45:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:57.577 19:45:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:57.577 [2024-07-24 19:45:26.165654] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:57.577 [2024-07-24 19:45:26.165826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62247 ] 00:06:57.835 [2024-07-24 19:45:26.304467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.835 [2024-07-24 19:45:26.408739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.835 [2024-07-24 19:45:26.463265] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:58.094  Copying: 512/512 [B] (average 250 kBps) 00:06:58.094 00:06:58.094 19:45:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ toecte2mnm5zr3v6eg2oup6vpqwrb3x1o2w279qqxqnr4486j49o4gmqx8d8yzvtdg9s6nvhpgwujme09o7vvs7ciw10w32jgzqdpiemnzvubdruh73omyu730zx81l7ir9v7sz754ue2zap2ajw9uh4jkn6pyl9pe0a8md19ywffe9bcpo8hkbnho987k7b3jyhkfvvuphq51aof87kx61jquqmc9yttpanoecpt5t6zjxkxs2uroic4t350iggeh8jm46533rvwzo5bryupmyfx194iynxaiz7z7hr0i03s5184nq7mldej0p30m31nbgwn9j1px1pnron20uxvij739oi8413pg4csj101pq03om73w6s7pg7v4l40or4eac4e6d24astzidu43hgu7ieqpx5bbvhrhcdde6cy68bf419awqrp39kskpazoio2syagtdktv4my1fpzskhbmy8jjllikbwdzln5t2ajvolni0e49krnkaiwj484h8o == \t\o\e\c\t\e\2\m\n\m\5\z\r\3\v\6\e\g\2\o\u\p\6\v\p\q\w\r\b\3\x\1\o\2\w\2\7\9\q\q\x\q\n\r\4\4\8\6\j\4\9\o\4\g\m\q\x\8\d\8\y\z\v\t\d\g\9\s\6\n\v\h\p\g\w\u\j\m\e\0\9\o\7\v\v\s\7\c\i\w\1\0\w\3\2\j\g\z\q\d\p\i\e\m\n\z\v\u\b\d\r\u\h\7\3\o\m\y\u\7\3\0\z\x\8\1\l\7\i\r\9\v\7\s\z\7\5\4\u\e\2\z\a\p\2\a\j\w\9\u\h\4\j\k\n\6\p\y\l\9\p\e\0\a\8\m\d\1\9\y\w\f\f\e\9\b\c\p\o\8\h\k\b\n\h\o\9\8\7\k\7\b\3\j\y\h\k\f\v\v\u\p\h\q\5\1\a\o\f\8\7\k\x\6\1\j\q\u\q\m\c\9\y\t\t\p\a\n\o\e\c\p\t\5\t\6\z\j\x\k\x\s\2\u\r\o\i\c\4\t\3\5\0\i\g\g\e\h\8\j\m\4\6\5\3\3\r\v\w\z\o\5\b\r\y\u\p\m\y\f\x\1\9\4\i\y\n\x\a\i\z\7\z\7\h\r\0\i\0\3\s\5\1\8\4\n\q\7\m\l\d\e\j\0\p\3\0\m\3\1\n\b\g\w\n\9\j\1\p\x\1\p\n\r\o\n\2\0\u\x\v\i\j\7\3\9\o\i\8\4\1\3\p\g\4\c\s\j\1\0\1\p\q\0\3\o\m\7\3\w\6\s\7\p\g\7\v\4\l\4\0\o\r\4\e\a\c\4\e\6\d\2\4\a\s\t\z\i\d\u\4\3\h\g\u\7\i\e\q\p\x\5\b\b\v\h\r\h\c\d\d\e\6\c\y\6\8\b\f\4\1\9\a\w\q\r\p\3\9\k\s\k\p\a\z\o\i\o\2\s\y\a\g\t\d\k\t\v\4\m\y\1\f\p\z\s\k\h\b\m\y\8\j\j\l\l\i\k\b\w\d\z\l\n\5\t\2\a\j\v\o\l\n\i\0\e\4\9\k\r\n\k\a\i\w\j\4\8\4\h\8\o ]] 00:06:58.094 19:45:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:58.094 19:45:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:58.094 19:45:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:58.094 19:45:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:58.094 19:45:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:58.094 19:45:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:58.094 [2024-07-24 19:45:26.755197] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:58.094 [2024-07-24 19:45:26.755283] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62256 ] 00:06:58.353 [2024-07-24 19:45:26.890287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.353 [2024-07-24 19:45:27.000833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.611 [2024-07-24 19:45:27.055249] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:58.870  Copying: 512/512 [B] (average 500 kBps) 00:06:58.870 00:06:58.870 19:45:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ d7ni7msg1bdjxva6cm935cj1tg71tjvwaqbqfs4ii9m8zikxx5r4muw51rr7ewidbtqywrspt2kbmddedh21qqseq9f3je36r55undnyvcu51cfswwm9obohugc000yyqm55tjfh40lw3s5w3ow2qdu7lkolgctht3g8dq1g53xfryrvjdjx2x8gsvgo1slttm1fnsumbo3gyb57woap3xbtbdr36ev1al0vmoeebpb8ap40josiw0k0jvttownf2wmqxg5q4t6qmuupecoflw89kotzn1jznpe6acleq3hxnk8wvf9x0q9zns4ratoww5w5ykxn5phzp42j9gm6p3mx3ye8g62cf8xv70skblv8szswe8wtgksrip4wy7de6f716s81bvi0k1jg49zkh3x1zdapd0swx6cjjevx4f1wqpk5r7lvbpjvzqhbwm82uzo047b4fk88gbipe4la2tfdlayt625rrkzgcv4z3gk278u813kb54ziep7m5x9s == \d\7\n\i\7\m\s\g\1\b\d\j\x\v\a\6\c\m\9\3\5\c\j\1\t\g\7\1\t\j\v\w\a\q\b\q\f\s\4\i\i\9\m\8\z\i\k\x\x\5\r\4\m\u\w\5\1\r\r\7\e\w\i\d\b\t\q\y\w\r\s\p\t\2\k\b\m\d\d\e\d\h\2\1\q\q\s\e\q\9\f\3\j\e\3\6\r\5\5\u\n\d\n\y\v\c\u\5\1\c\f\s\w\w\m\9\o\b\o\h\u\g\c\0\0\0\y\y\q\m\5\5\t\j\f\h\4\0\l\w\3\s\5\w\3\o\w\2\q\d\u\7\l\k\o\l\g\c\t\h\t\3\g\8\d\q\1\g\5\3\x\f\r\y\r\v\j\d\j\x\2\x\8\g\s\v\g\o\1\s\l\t\t\m\1\f\n\s\u\m\b\o\3\g\y\b\5\7\w\o\a\p\3\x\b\t\b\d\r\3\6\e\v\1\a\l\0\v\m\o\e\e\b\p\b\8\a\p\4\0\j\o\s\i\w\0\k\0\j\v\t\t\o\w\n\f\2\w\m\q\x\g\5\q\4\t\6\q\m\u\u\p\e\c\o\f\l\w\8\9\k\o\t\z\n\1\j\z\n\p\e\6\a\c\l\e\q\3\h\x\n\k\8\w\v\f\9\x\0\q\9\z\n\s\4\r\a\t\o\w\w\5\w\5\y\k\x\n\5\p\h\z\p\4\2\j\9\g\m\6\p\3\m\x\3\y\e\8\g\6\2\c\f\8\x\v\7\0\s\k\b\l\v\8\s\z\s\w\e\8\w\t\g\k\s\r\i\p\4\w\y\7\d\e\6\f\7\1\6\s\8\1\b\v\i\0\k\1\j\g\4\9\z\k\h\3\x\1\z\d\a\p\d\0\s\w\x\6\c\j\j\e\v\x\4\f\1\w\q\p\k\5\r\7\l\v\b\p\j\v\z\q\h\b\w\m\8\2\u\z\o\0\4\7\b\4\f\k\8\8\g\b\i\p\e\4\l\a\2\t\f\d\l\a\y\t\6\2\5\r\r\k\z\g\c\v\4\z\3\g\k\2\7\8\u\8\1\3\k\b\5\4\z\i\e\p\7\m\5\x\9\s ]] 00:06:58.871 19:45:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:58.871 19:45:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:58.871 [2024-07-24 19:45:27.343495] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:58.871 [2024-07-24 19:45:27.343579] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62266 ] 00:06:58.871 [2024-07-24 19:45:27.474309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.130 [2024-07-24 19:45:27.588791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.130 [2024-07-24 19:45:27.643491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.389  Copying: 512/512 [B] (average 500 kBps) 00:06:59.389 00:06:59.389 19:45:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ d7ni7msg1bdjxva6cm935cj1tg71tjvwaqbqfs4ii9m8zikxx5r4muw51rr7ewidbtqywrspt2kbmddedh21qqseq9f3je36r55undnyvcu51cfswwm9obohugc000yyqm55tjfh40lw3s5w3ow2qdu7lkolgctht3g8dq1g53xfryrvjdjx2x8gsvgo1slttm1fnsumbo3gyb57woap3xbtbdr36ev1al0vmoeebpb8ap40josiw0k0jvttownf2wmqxg5q4t6qmuupecoflw89kotzn1jznpe6acleq3hxnk8wvf9x0q9zns4ratoww5w5ykxn5phzp42j9gm6p3mx3ye8g62cf8xv70skblv8szswe8wtgksrip4wy7de6f716s81bvi0k1jg49zkh3x1zdapd0swx6cjjevx4f1wqpk5r7lvbpjvzqhbwm82uzo047b4fk88gbipe4la2tfdlayt625rrkzgcv4z3gk278u813kb54ziep7m5x9s == \d\7\n\i\7\m\s\g\1\b\d\j\x\v\a\6\c\m\9\3\5\c\j\1\t\g\7\1\t\j\v\w\a\q\b\q\f\s\4\i\i\9\m\8\z\i\k\x\x\5\r\4\m\u\w\5\1\r\r\7\e\w\i\d\b\t\q\y\w\r\s\p\t\2\k\b\m\d\d\e\d\h\2\1\q\q\s\e\q\9\f\3\j\e\3\6\r\5\5\u\n\d\n\y\v\c\u\5\1\c\f\s\w\w\m\9\o\b\o\h\u\g\c\0\0\0\y\y\q\m\5\5\t\j\f\h\4\0\l\w\3\s\5\w\3\o\w\2\q\d\u\7\l\k\o\l\g\c\t\h\t\3\g\8\d\q\1\g\5\3\x\f\r\y\r\v\j\d\j\x\2\x\8\g\s\v\g\o\1\s\l\t\t\m\1\f\n\s\u\m\b\o\3\g\y\b\5\7\w\o\a\p\3\x\b\t\b\d\r\3\6\e\v\1\a\l\0\v\m\o\e\e\b\p\b\8\a\p\4\0\j\o\s\i\w\0\k\0\j\v\t\t\o\w\n\f\2\w\m\q\x\g\5\q\4\t\6\q\m\u\u\p\e\c\o\f\l\w\8\9\k\o\t\z\n\1\j\z\n\p\e\6\a\c\l\e\q\3\h\x\n\k\8\w\v\f\9\x\0\q\9\z\n\s\4\r\a\t\o\w\w\5\w\5\y\k\x\n\5\p\h\z\p\4\2\j\9\g\m\6\p\3\m\x\3\y\e\8\g\6\2\c\f\8\x\v\7\0\s\k\b\l\v\8\s\z\s\w\e\8\w\t\g\k\s\r\i\p\4\w\y\7\d\e\6\f\7\1\6\s\8\1\b\v\i\0\k\1\j\g\4\9\z\k\h\3\x\1\z\d\a\p\d\0\s\w\x\6\c\j\j\e\v\x\4\f\1\w\q\p\k\5\r\7\l\v\b\p\j\v\z\q\h\b\w\m\8\2\u\z\o\0\4\7\b\4\f\k\8\8\g\b\i\p\e\4\l\a\2\t\f\d\l\a\y\t\6\2\5\r\r\k\z\g\c\v\4\z\3\g\k\2\7\8\u\8\1\3\k\b\5\4\z\i\e\p\7\m\5\x\9\s ]] 00:06:59.389 19:45:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:59.389 19:45:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:59.389 [2024-07-24 19:45:27.941075] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:59.389 [2024-07-24 19:45:27.941200] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62275 ] 00:06:59.647 [2024-07-24 19:45:28.076653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.647 [2024-07-24 19:45:28.184940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.647 [2024-07-24 19:45:28.240551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.906  Copying: 512/512 [B] (average 125 kBps) 00:06:59.906 00:06:59.906 19:45:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ d7ni7msg1bdjxva6cm935cj1tg71tjvwaqbqfs4ii9m8zikxx5r4muw51rr7ewidbtqywrspt2kbmddedh21qqseq9f3je36r55undnyvcu51cfswwm9obohugc000yyqm55tjfh40lw3s5w3ow2qdu7lkolgctht3g8dq1g53xfryrvjdjx2x8gsvgo1slttm1fnsumbo3gyb57woap3xbtbdr36ev1al0vmoeebpb8ap40josiw0k0jvttownf2wmqxg5q4t6qmuupecoflw89kotzn1jznpe6acleq3hxnk8wvf9x0q9zns4ratoww5w5ykxn5phzp42j9gm6p3mx3ye8g62cf8xv70skblv8szswe8wtgksrip4wy7de6f716s81bvi0k1jg49zkh3x1zdapd0swx6cjjevx4f1wqpk5r7lvbpjvzqhbwm82uzo047b4fk88gbipe4la2tfdlayt625rrkzgcv4z3gk278u813kb54ziep7m5x9s == \d\7\n\i\7\m\s\g\1\b\d\j\x\v\a\6\c\m\9\3\5\c\j\1\t\g\7\1\t\j\v\w\a\q\b\q\f\s\4\i\i\9\m\8\z\i\k\x\x\5\r\4\m\u\w\5\1\r\r\7\e\w\i\d\b\t\q\y\w\r\s\p\t\2\k\b\m\d\d\e\d\h\2\1\q\q\s\e\q\9\f\3\j\e\3\6\r\5\5\u\n\d\n\y\v\c\u\5\1\c\f\s\w\w\m\9\o\b\o\h\u\g\c\0\0\0\y\y\q\m\5\5\t\j\f\h\4\0\l\w\3\s\5\w\3\o\w\2\q\d\u\7\l\k\o\l\g\c\t\h\t\3\g\8\d\q\1\g\5\3\x\f\r\y\r\v\j\d\j\x\2\x\8\g\s\v\g\o\1\s\l\t\t\m\1\f\n\s\u\m\b\o\3\g\y\b\5\7\w\o\a\p\3\x\b\t\b\d\r\3\6\e\v\1\a\l\0\v\m\o\e\e\b\p\b\8\a\p\4\0\j\o\s\i\w\0\k\0\j\v\t\t\o\w\n\f\2\w\m\q\x\g\5\q\4\t\6\q\m\u\u\p\e\c\o\f\l\w\8\9\k\o\t\z\n\1\j\z\n\p\e\6\a\c\l\e\q\3\h\x\n\k\8\w\v\f\9\x\0\q\9\z\n\s\4\r\a\t\o\w\w\5\w\5\y\k\x\n\5\p\h\z\p\4\2\j\9\g\m\6\p\3\m\x\3\y\e\8\g\6\2\c\f\8\x\v\7\0\s\k\b\l\v\8\s\z\s\w\e\8\w\t\g\k\s\r\i\p\4\w\y\7\d\e\6\f\7\1\6\s\8\1\b\v\i\0\k\1\j\g\4\9\z\k\h\3\x\1\z\d\a\p\d\0\s\w\x\6\c\j\j\e\v\x\4\f\1\w\q\p\k\5\r\7\l\v\b\p\j\v\z\q\h\b\w\m\8\2\u\z\o\0\4\7\b\4\f\k\8\8\g\b\i\p\e\4\l\a\2\t\f\d\l\a\y\t\6\2\5\r\r\k\z\g\c\v\4\z\3\g\k\2\7\8\u\8\1\3\k\b\5\4\z\i\e\p\7\m\5\x\9\s ]] 00:06:59.906 19:45:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:59.906 19:45:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:59.906 [2024-07-24 19:45:28.527000] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:06:59.906 [2024-07-24 19:45:28.527084] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62285 ] 00:07:00.165 [2024-07-24 19:45:28.657659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.165 [2024-07-24 19:45:28.772878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.165 [2024-07-24 19:45:28.827805] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:00.424  Copying: 512/512 [B] (average 166 kBps) 00:07:00.424 00:07:00.424 19:45:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ d7ni7msg1bdjxva6cm935cj1tg71tjvwaqbqfs4ii9m8zikxx5r4muw51rr7ewidbtqywrspt2kbmddedh21qqseq9f3je36r55undnyvcu51cfswwm9obohugc000yyqm55tjfh40lw3s5w3ow2qdu7lkolgctht3g8dq1g53xfryrvjdjx2x8gsvgo1slttm1fnsumbo3gyb57woap3xbtbdr36ev1al0vmoeebpb8ap40josiw0k0jvttownf2wmqxg5q4t6qmuupecoflw89kotzn1jznpe6acleq3hxnk8wvf9x0q9zns4ratoww5w5ykxn5phzp42j9gm6p3mx3ye8g62cf8xv70skblv8szswe8wtgksrip4wy7de6f716s81bvi0k1jg49zkh3x1zdapd0swx6cjjevx4f1wqpk5r7lvbpjvzqhbwm82uzo047b4fk88gbipe4la2tfdlayt625rrkzgcv4z3gk278u813kb54ziep7m5x9s == \d\7\n\i\7\m\s\g\1\b\d\j\x\v\a\6\c\m\9\3\5\c\j\1\t\g\7\1\t\j\v\w\a\q\b\q\f\s\4\i\i\9\m\8\z\i\k\x\x\5\r\4\m\u\w\5\1\r\r\7\e\w\i\d\b\t\q\y\w\r\s\p\t\2\k\b\m\d\d\e\d\h\2\1\q\q\s\e\q\9\f\3\j\e\3\6\r\5\5\u\n\d\n\y\v\c\u\5\1\c\f\s\w\w\m\9\o\b\o\h\u\g\c\0\0\0\y\y\q\m\5\5\t\j\f\h\4\0\l\w\3\s\5\w\3\o\w\2\q\d\u\7\l\k\o\l\g\c\t\h\t\3\g\8\d\q\1\g\5\3\x\f\r\y\r\v\j\d\j\x\2\x\8\g\s\v\g\o\1\s\l\t\t\m\1\f\n\s\u\m\b\o\3\g\y\b\5\7\w\o\a\p\3\x\b\t\b\d\r\3\6\e\v\1\a\l\0\v\m\o\e\e\b\p\b\8\a\p\4\0\j\o\s\i\w\0\k\0\j\v\t\t\o\w\n\f\2\w\m\q\x\g\5\q\4\t\6\q\m\u\u\p\e\c\o\f\l\w\8\9\k\o\t\z\n\1\j\z\n\p\e\6\a\c\l\e\q\3\h\x\n\k\8\w\v\f\9\x\0\q\9\z\n\s\4\r\a\t\o\w\w\5\w\5\y\k\x\n\5\p\h\z\p\4\2\j\9\g\m\6\p\3\m\x\3\y\e\8\g\6\2\c\f\8\x\v\7\0\s\k\b\l\v\8\s\z\s\w\e\8\w\t\g\k\s\r\i\p\4\w\y\7\d\e\6\f\7\1\6\s\8\1\b\v\i\0\k\1\j\g\4\9\z\k\h\3\x\1\z\d\a\p\d\0\s\w\x\6\c\j\j\e\v\x\4\f\1\w\q\p\k\5\r\7\l\v\b\p\j\v\z\q\h\b\w\m\8\2\u\z\o\0\4\7\b\4\f\k\8\8\g\b\i\p\e\4\l\a\2\t\f\d\l\a\y\t\6\2\5\r\r\k\z\g\c\v\4\z\3\g\k\2\7\8\u\8\1\3\k\b\5\4\z\i\e\p\7\m\5\x\9\s ]] 00:07:00.424 ************************************ 00:07:00.424 END TEST dd_flags_misc 00:07:00.424 ************************************ 00:07:00.424 00:07:00.424 real 0m4.835s 00:07:00.424 user 0m2.831s 00:07:00.424 sys 0m2.151s 00:07:00.424 19:45:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.424 19:45:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:00.684 19:45:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:00.684 19:45:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:00.684 * Second test run, disabling liburing, forcing AIO 00:07:00.684 19:45:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:00.684 19:45:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:00.684 19:45:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.684 19:45:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.684 19:45:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
00:07:00.684 ************************************ 00:07:00.684 START TEST dd_flag_append_forced_aio 00:07:00.684 ************************************ 00:07:00.684 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:07:00.684 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:00.684 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:00.684 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:00.684 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:00.684 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:00.684 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=c9kzdeikari04i8rpfhp4gn204xixzu5 00:07:00.684 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:00.684 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:00.684 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:00.684 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=idpf5jqpkqyl32j1s8xgqflkfeeeiard 00:07:00.684 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s c9kzdeikari04i8rpfhp4gn204xixzu5 00:07:00.684 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s idpf5jqpkqyl32j1s8xgqflkfeeeiard 00:07:00.684 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:00.684 [2024-07-24 19:45:29.187363] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:00.684 [2024-07-24 19:45:29.187459] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62319 ] 00:07:00.684 [2024-07-24 19:45:29.322729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.943 [2024-07-24 19:45:29.433274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.943 [2024-07-24 19:45:29.485914] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.202  Copying: 32/32 [B] (average 31 kBps) 00:07:01.202 00:07:01.202 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ idpf5jqpkqyl32j1s8xgqflkfeeeiardc9kzdeikari04i8rpfhp4gn204xixzu5 == \i\d\p\f\5\j\q\p\k\q\y\l\3\2\j\1\s\8\x\g\q\f\l\k\f\e\e\e\i\a\r\d\c\9\k\z\d\e\i\k\a\r\i\0\4\i\8\r\p\f\h\p\4\g\n\2\0\4\x\i\x\z\u\5 ]] 00:07:01.202 00:07:01.202 real 0m0.625s 00:07:01.202 user 0m0.353s 00:07:01.202 sys 0m0.149s 00:07:01.202 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.202 ************************************ 00:07:01.202 END TEST dd_flag_append_forced_aio 00:07:01.202 ************************************ 00:07:01.202 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:01.202 19:45:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:01.202 19:45:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:01.202 19:45:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.202 19:45:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:01.202 ************************************ 00:07:01.202 START TEST dd_flag_directory_forced_aio 00:07:01.202 ************************************ 00:07:01.202 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:07:01.202 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:01.202 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:01.202 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:01.202 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.202 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.202 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.202 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.202 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.202 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.202 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.202 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.202 19:45:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:01.460 [2024-07-24 19:45:29.880094] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:01.460 [2024-07-24 19:45:29.880397] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62344 ] 00:07:01.460 [2024-07-24 19:45:30.023677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.460 [2024-07-24 19:45:30.121755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.719 [2024-07-24 19:45:30.175977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.719 [2024-07-24 19:45:30.209350] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:01.719 [2024-07-24 19:45:30.209425] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:01.719 [2024-07-24 19:45:30.209438] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.719 [2024-07-24 19:45:30.322055] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:01.978 19:45:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:07:01.978 19:45:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:01.978 19:45:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:07:01.978 19:45:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:01.978 19:45:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:01.978 19:45:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:01.978 19:45:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:01.978 19:45:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:01.978 19:45:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 
00:07:01.978 19:45:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.978 19:45:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.978 19:45:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.978 19:45:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.978 19:45:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.978 19:45:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.978 19:45:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:01.978 19:45:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:01.978 19:45:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:01.978 [2024-07-24 19:45:30.495332] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:01.978 [2024-07-24 19:45:30.495418] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62357 ] 00:07:01.978 [2024-07-24 19:45:30.634340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.237 [2024-07-24 19:45:30.722609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.237 [2024-07-24 19:45:30.778319] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:02.237 [2024-07-24 19:45:30.810319] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:02.237 [2024-07-24 19:45:30.810378] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:02.237 [2024-07-24 19:45:30.810408] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:02.498 [2024-07-24 19:45:30.929190] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:02.498 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:07:02.498 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.498 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:07:02.498 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:02.498 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:02.498 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:07:02.498 00:07:02.498 real 0m1.235s 00:07:02.498 user 0m0.714s 00:07:02.498 sys 0m0.310s 00:07:02.498 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.498 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:02.498 ************************************ 00:07:02.498 END TEST dd_flag_directory_forced_aio 00:07:02.498 ************************************ 00:07:02.498 19:45:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:02.498 19:45:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.498 19:45:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.498 19:45:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:02.498 ************************************ 00:07:02.498 START TEST dd_flag_nofollow_forced_aio 00:07:02.498 ************************************ 00:07:02.498 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:07:02.499 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:02.499 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:02.499 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:02.499 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:02.499 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:02.499 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:02.499 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:02.499 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.499 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.499 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.499 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.499 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.499 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.499 19:45:31 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:02.499 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:02.499 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:02.757 [2024-07-24 19:45:31.166323] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:02.757 [2024-07-24 19:45:31.166425] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62385 ] 00:07:02.757 [2024-07-24 19:45:31.305907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.757 [2024-07-24 19:45:31.420787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.016 [2024-07-24 19:45:31.475713] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.016 [2024-07-24 19:45:31.511229] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:03.016 [2024-07-24 19:45:31.511289] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:03.016 [2024-07-24 19:45:31.511319] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.016 [2024-07-24 19:45:31.628114] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:03.274 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:07:03.274 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.274 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:07:03.275 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:03.275 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:03.275 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.275 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:03.275 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:03.275 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:03.275 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.275 19:45:31 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.275 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.275 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.275 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.275 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.275 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:03.275 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:03.275 19:45:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:03.275 [2024-07-24 19:45:31.784908] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:03.275 [2024-07-24 19:45:31.785010] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62395 ] 00:07:03.275 [2024-07-24 19:45:31.917356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.534 [2024-07-24 19:45:32.019419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.534 [2024-07-24 19:45:32.074377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.534 [2024-07-24 19:45:32.106453] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:03.534 [2024-07-24 19:45:32.106508] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:03.534 [2024-07-24 19:45:32.106538] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.793 [2024-07-24 19:45:32.222150] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:03.793 19:45:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:07:03.793 19:45:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.793 19:45:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:07:03.793 19:45:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:03.793 19:45:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:03.793 19:45:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.793 19:45:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:07:03.793 19:45:32 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:03.793 19:45:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:03.793 19:45:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:03.793 [2024-07-24 19:45:32.387658] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:03.793 [2024-07-24 19:45:32.387790] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62408 ] 00:07:04.143 [2024-07-24 19:45:32.522270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.143 [2024-07-24 19:45:32.614862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.143 [2024-07-24 19:45:32.669224] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.417  Copying: 512/512 [B] (average 500 kBps) 00:07:04.417 00:07:04.417 19:45:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ vrhb8f2a9utfds2szgm2mmry73t2ctfpcxhrjvoat1risxw92szlj56s5qib9l5hnmtn02lujdelh6dj97pr248o2xjxyqyk0i1vpoph9rzere3lvxk3yi2bwmyh4ppz91bvdzraaa7ylv9g6g2u6rkcoc3x3uoibtrl36gl9684b3byornogxpzgn2iqhxy1juhb6bda89734wmenw290kf4h5qr9jhi51th1w2jsntysji55e1hum1j61owknc5xjba4vvxgaucwru37rbnt6l6r05jk57ntsfxzgk9hf97b0aj7o8frkehcd7na6wnjec0ywzgenwk39obky2bzqlycouweuwnx6yz38bb5aa1dsziiiuawhcg2ptvp32khmmk3zh0qjilolozry1x6ypyudb3s7p5fabdvcix4abtclqdoftexwpfdxpv2dvnai765ng7w3ym2746f9h22yw6ea8cwtymqw0mk3iacur9zynj5r5a6q2epud0213 == \v\r\h\b\8\f\2\a\9\u\t\f\d\s\2\s\z\g\m\2\m\m\r\y\7\3\t\2\c\t\f\p\c\x\h\r\j\v\o\a\t\1\r\i\s\x\w\9\2\s\z\l\j\5\6\s\5\q\i\b\9\l\5\h\n\m\t\n\0\2\l\u\j\d\e\l\h\6\d\j\9\7\p\r\2\4\8\o\2\x\j\x\y\q\y\k\0\i\1\v\p\o\p\h\9\r\z\e\r\e\3\l\v\x\k\3\y\i\2\b\w\m\y\h\4\p\p\z\9\1\b\v\d\z\r\a\a\a\7\y\l\v\9\g\6\g\2\u\6\r\k\c\o\c\3\x\3\u\o\i\b\t\r\l\3\6\g\l\9\6\8\4\b\3\b\y\o\r\n\o\g\x\p\z\g\n\2\i\q\h\x\y\1\j\u\h\b\6\b\d\a\8\9\7\3\4\w\m\e\n\w\2\9\0\k\f\4\h\5\q\r\9\j\h\i\5\1\t\h\1\w\2\j\s\n\t\y\s\j\i\5\5\e\1\h\u\m\1\j\6\1\o\w\k\n\c\5\x\j\b\a\4\v\v\x\g\a\u\c\w\r\u\3\7\r\b\n\t\6\l\6\r\0\5\j\k\5\7\n\t\s\f\x\z\g\k\9\h\f\9\7\b\0\a\j\7\o\8\f\r\k\e\h\c\d\7\n\a\6\w\n\j\e\c\0\y\w\z\g\e\n\w\k\3\9\o\b\k\y\2\b\z\q\l\y\c\o\u\w\e\u\w\n\x\6\y\z\3\8\b\b\5\a\a\1\d\s\z\i\i\i\u\a\w\h\c\g\2\p\t\v\p\3\2\k\h\m\m\k\3\z\h\0\q\j\i\l\o\l\o\z\r\y\1\x\6\y\p\y\u\d\b\3\s\7\p\5\f\a\b\d\v\c\i\x\4\a\b\t\c\l\q\d\o\f\t\e\x\w\p\f\d\x\p\v\2\d\v\n\a\i\7\6\5\n\g\7\w\3\y\m\2\7\4\6\f\9\h\2\2\y\w\6\e\a\8\c\w\t\y\m\q\w\0\m\k\3\i\a\c\u\r\9\z\y\n\j\5\r\5\a\6\q\2\e\p\u\d\0\2\1\3 ]] 00:07:04.417 00:07:04.417 real 0m1.856s 00:07:04.417 user 0m1.069s 00:07:04.417 sys 0m0.455s 00:07:04.417 19:45:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.417 ************************************ 00:07:04.417 END TEST dd_flag_nofollow_forced_aio 00:07:04.417 19:45:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:04.417 ************************************ 00:07:04.417 19:45:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:04.417 
19:45:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.417 19:45:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.417 19:45:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:04.417 ************************************ 00:07:04.417 START TEST dd_flag_noatime_forced_aio 00:07:04.417 ************************************ 00:07:04.417 19:45:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:07:04.417 19:45:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:04.417 19:45:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:04.417 19:45:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:04.417 19:45:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:04.417 19:45:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:04.417 19:45:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:04.417 19:45:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721850332 00:07:04.417 19:45:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:04.417 19:45:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721850332 00:07:04.417 19:45:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:05.792 19:45:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.792 [2024-07-24 19:45:34.078546] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:05.792 [2024-07-24 19:45:34.078629] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62447 ] 00:07:05.792 [2024-07-24 19:45:34.212602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.792 [2024-07-24 19:45:34.317294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.792 [2024-07-24 19:45:34.372402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.052  Copying: 512/512 [B] (average 500 kBps) 00:07:06.052 00:07:06.052 19:45:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:06.052 19:45:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721850332 )) 00:07:06.052 19:45:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:06.052 19:45:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721850332 )) 00:07:06.052 19:45:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:06.052 [2024-07-24 19:45:34.711599] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:06.052 [2024-07-24 19:45:34.711697] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62460 ] 00:07:06.311 [2024-07-24 19:45:34.850541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.311 [2024-07-24 19:45:34.960526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.569 [2024-07-24 19:45:35.017899] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.828  Copying: 512/512 [B] (average 500 kBps) 00:07:06.828 00:07:06.828 19:45:35 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:06.828 19:45:35 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721850335 )) 00:07:06.828 00:07:06.828 real 0m2.309s 00:07:06.828 user 0m0.749s 00:07:06.828 sys 0m0.316s 00:07:06.828 19:45:35 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.828 19:45:35 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:06.828 ************************************ 00:07:06.828 END TEST dd_flag_noatime_forced_aio 00:07:06.828 ************************************ 00:07:06.828 19:45:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:06.828 19:45:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.828 19:45:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.828 19:45:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:06.828 
************************************ 00:07:06.828 START TEST dd_flags_misc_forced_aio 00:07:06.828 ************************************ 00:07:06.828 19:45:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:07:06.828 19:45:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:06.828 19:45:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:06.828 19:45:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:06.828 19:45:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:06.828 19:45:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:06.828 19:45:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:06.828 19:45:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:06.828 19:45:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:06.828 19:45:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:06.828 [2024-07-24 19:45:35.458706] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:06.828 [2024-07-24 19:45:35.458942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62492 ] 00:07:07.087 [2024-07-24 19:45:35.605396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.087 [2024-07-24 19:45:35.718117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.346 [2024-07-24 19:45:35.774519] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:07.605  Copying: 512/512 [B] (average 500 kBps) 00:07:07.606 00:07:07.606 19:45:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 67ofm1cpp714gshs3icn58dlv0doaoh19xk92gpli3jkp4g3xr13dxjb9br7y2fys0zksksimk1gxhx8zb82d44jffx32kygt3udc02rekhkktt05zpg73o2mt6m6rai8gvyludvid9tb5oj853k54q77n4hqoehfcafldm0arso0yt5nxftylufjorjy65fuce6ne1hcbgs96z9qsget7eqnwh8ygm4qmjfqj3x1r38z45eni6m5zir6gggd8gyt98uz4jx7k1z7na4qwr2ufqzpynsm0nw1kas8n0nuoxj0pp82wm2h04wdglnxq88uv7whztqz8sd0ggclmuol698zfw325xmv83nj1lz3aa7xkg2elch1u4coaswd1d4l3af0uah2x7uddzilrexiz60609irgc6956bcjx4992lz0j2d36vd1xnkbhas0asuivit42476jpwm3mqtgby1jw58uftulb46mpn15ymuwzzb6aa49aj466t7uiwz5c == 
\6\7\o\f\m\1\c\p\p\7\1\4\g\s\h\s\3\i\c\n\5\8\d\l\v\0\d\o\a\o\h\1\9\x\k\9\2\g\p\l\i\3\j\k\p\4\g\3\x\r\1\3\d\x\j\b\9\b\r\7\y\2\f\y\s\0\z\k\s\k\s\i\m\k\1\g\x\h\x\8\z\b\8\2\d\4\4\j\f\f\x\3\2\k\y\g\t\3\u\d\c\0\2\r\e\k\h\k\k\t\t\0\5\z\p\g\7\3\o\2\m\t\6\m\6\r\a\i\8\g\v\y\l\u\d\v\i\d\9\t\b\5\o\j\8\5\3\k\5\4\q\7\7\n\4\h\q\o\e\h\f\c\a\f\l\d\m\0\a\r\s\o\0\y\t\5\n\x\f\t\y\l\u\f\j\o\r\j\y\6\5\f\u\c\e\6\n\e\1\h\c\b\g\s\9\6\z\9\q\s\g\e\t\7\e\q\n\w\h\8\y\g\m\4\q\m\j\f\q\j\3\x\1\r\3\8\z\4\5\e\n\i\6\m\5\z\i\r\6\g\g\g\d\8\g\y\t\9\8\u\z\4\j\x\7\k\1\z\7\n\a\4\q\w\r\2\u\f\q\z\p\y\n\s\m\0\n\w\1\k\a\s\8\n\0\n\u\o\x\j\0\p\p\8\2\w\m\2\h\0\4\w\d\g\l\n\x\q\8\8\u\v\7\w\h\z\t\q\z\8\s\d\0\g\g\c\l\m\u\o\l\6\9\8\z\f\w\3\2\5\x\m\v\8\3\n\j\1\l\z\3\a\a\7\x\k\g\2\e\l\c\h\1\u\4\c\o\a\s\w\d\1\d\4\l\3\a\f\0\u\a\h\2\x\7\u\d\d\z\i\l\r\e\x\i\z\6\0\6\0\9\i\r\g\c\6\9\5\6\b\c\j\x\4\9\9\2\l\z\0\j\2\d\3\6\v\d\1\x\n\k\b\h\a\s\0\a\s\u\i\v\i\t\4\2\4\7\6\j\p\w\m\3\m\q\t\g\b\y\1\j\w\5\8\u\f\t\u\l\b\4\6\m\p\n\1\5\y\m\u\w\z\z\b\6\a\a\4\9\a\j\4\6\6\t\7\u\i\w\z\5\c ]] 00:07:07.606 19:45:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:07.606 19:45:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:07.606 [2024-07-24 19:45:36.121258] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:07.606 [2024-07-24 19:45:36.121363] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62495 ] 00:07:07.606 [2024-07-24 19:45:36.258534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.865 [2024-07-24 19:45:36.367552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.865 [2024-07-24 19:45:36.425996] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:08.123  Copying: 512/512 [B] (average 500 kBps) 00:07:08.123 00:07:08.123 19:45:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 67ofm1cpp714gshs3icn58dlv0doaoh19xk92gpli3jkp4g3xr13dxjb9br7y2fys0zksksimk1gxhx8zb82d44jffx32kygt3udc02rekhkktt05zpg73o2mt6m6rai8gvyludvid9tb5oj853k54q77n4hqoehfcafldm0arso0yt5nxftylufjorjy65fuce6ne1hcbgs96z9qsget7eqnwh8ygm4qmjfqj3x1r38z45eni6m5zir6gggd8gyt98uz4jx7k1z7na4qwr2ufqzpynsm0nw1kas8n0nuoxj0pp82wm2h04wdglnxq88uv7whztqz8sd0ggclmuol698zfw325xmv83nj1lz3aa7xkg2elch1u4coaswd1d4l3af0uah2x7uddzilrexiz60609irgc6956bcjx4992lz0j2d36vd1xnkbhas0asuivit42476jpwm3mqtgby1jw58uftulb46mpn15ymuwzzb6aa49aj466t7uiwz5c == 
\6\7\o\f\m\1\c\p\p\7\1\4\g\s\h\s\3\i\c\n\5\8\d\l\v\0\d\o\a\o\h\1\9\x\k\9\2\g\p\l\i\3\j\k\p\4\g\3\x\r\1\3\d\x\j\b\9\b\r\7\y\2\f\y\s\0\z\k\s\k\s\i\m\k\1\g\x\h\x\8\z\b\8\2\d\4\4\j\f\f\x\3\2\k\y\g\t\3\u\d\c\0\2\r\e\k\h\k\k\t\t\0\5\z\p\g\7\3\o\2\m\t\6\m\6\r\a\i\8\g\v\y\l\u\d\v\i\d\9\t\b\5\o\j\8\5\3\k\5\4\q\7\7\n\4\h\q\o\e\h\f\c\a\f\l\d\m\0\a\r\s\o\0\y\t\5\n\x\f\t\y\l\u\f\j\o\r\j\y\6\5\f\u\c\e\6\n\e\1\h\c\b\g\s\9\6\z\9\q\s\g\e\t\7\e\q\n\w\h\8\y\g\m\4\q\m\j\f\q\j\3\x\1\r\3\8\z\4\5\e\n\i\6\m\5\z\i\r\6\g\g\g\d\8\g\y\t\9\8\u\z\4\j\x\7\k\1\z\7\n\a\4\q\w\r\2\u\f\q\z\p\y\n\s\m\0\n\w\1\k\a\s\8\n\0\n\u\o\x\j\0\p\p\8\2\w\m\2\h\0\4\w\d\g\l\n\x\q\8\8\u\v\7\w\h\z\t\q\z\8\s\d\0\g\g\c\l\m\u\o\l\6\9\8\z\f\w\3\2\5\x\m\v\8\3\n\j\1\l\z\3\a\a\7\x\k\g\2\e\l\c\h\1\u\4\c\o\a\s\w\d\1\d\4\l\3\a\f\0\u\a\h\2\x\7\u\d\d\z\i\l\r\e\x\i\z\6\0\6\0\9\i\r\g\c\6\9\5\6\b\c\j\x\4\9\9\2\l\z\0\j\2\d\3\6\v\d\1\x\n\k\b\h\a\s\0\a\s\u\i\v\i\t\4\2\4\7\6\j\p\w\m\3\m\q\t\g\b\y\1\j\w\5\8\u\f\t\u\l\b\4\6\m\p\n\1\5\y\m\u\w\z\z\b\6\a\a\4\9\a\j\4\6\6\t\7\u\i\w\z\5\c ]] 00:07:08.123 19:45:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:08.123 19:45:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:08.123 [2024-07-24 19:45:36.776733] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:08.123 [2024-07-24 19:45:36.776876] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62508 ] 00:07:08.381 [2024-07-24 19:45:36.913231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.381 [2024-07-24 19:45:37.036470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.640 [2024-07-24 19:45:37.090931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:08.899  Copying: 512/512 [B] (average 250 kBps) 00:07:08.899 00:07:08.899 19:45:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 67ofm1cpp714gshs3icn58dlv0doaoh19xk92gpli3jkp4g3xr13dxjb9br7y2fys0zksksimk1gxhx8zb82d44jffx32kygt3udc02rekhkktt05zpg73o2mt6m6rai8gvyludvid9tb5oj853k54q77n4hqoehfcafldm0arso0yt5nxftylufjorjy65fuce6ne1hcbgs96z9qsget7eqnwh8ygm4qmjfqj3x1r38z45eni6m5zir6gggd8gyt98uz4jx7k1z7na4qwr2ufqzpynsm0nw1kas8n0nuoxj0pp82wm2h04wdglnxq88uv7whztqz8sd0ggclmuol698zfw325xmv83nj1lz3aa7xkg2elch1u4coaswd1d4l3af0uah2x7uddzilrexiz60609irgc6956bcjx4992lz0j2d36vd1xnkbhas0asuivit42476jpwm3mqtgby1jw58uftulb46mpn15ymuwzzb6aa49aj466t7uiwz5c == 
\6\7\o\f\m\1\c\p\p\7\1\4\g\s\h\s\3\i\c\n\5\8\d\l\v\0\d\o\a\o\h\1\9\x\k\9\2\g\p\l\i\3\j\k\p\4\g\3\x\r\1\3\d\x\j\b\9\b\r\7\y\2\f\y\s\0\z\k\s\k\s\i\m\k\1\g\x\h\x\8\z\b\8\2\d\4\4\j\f\f\x\3\2\k\y\g\t\3\u\d\c\0\2\r\e\k\h\k\k\t\t\0\5\z\p\g\7\3\o\2\m\t\6\m\6\r\a\i\8\g\v\y\l\u\d\v\i\d\9\t\b\5\o\j\8\5\3\k\5\4\q\7\7\n\4\h\q\o\e\h\f\c\a\f\l\d\m\0\a\r\s\o\0\y\t\5\n\x\f\t\y\l\u\f\j\o\r\j\y\6\5\f\u\c\e\6\n\e\1\h\c\b\g\s\9\6\z\9\q\s\g\e\t\7\e\q\n\w\h\8\y\g\m\4\q\m\j\f\q\j\3\x\1\r\3\8\z\4\5\e\n\i\6\m\5\z\i\r\6\g\g\g\d\8\g\y\t\9\8\u\z\4\j\x\7\k\1\z\7\n\a\4\q\w\r\2\u\f\q\z\p\y\n\s\m\0\n\w\1\k\a\s\8\n\0\n\u\o\x\j\0\p\p\8\2\w\m\2\h\0\4\w\d\g\l\n\x\q\8\8\u\v\7\w\h\z\t\q\z\8\s\d\0\g\g\c\l\m\u\o\l\6\9\8\z\f\w\3\2\5\x\m\v\8\3\n\j\1\l\z\3\a\a\7\x\k\g\2\e\l\c\h\1\u\4\c\o\a\s\w\d\1\d\4\l\3\a\f\0\u\a\h\2\x\7\u\d\d\z\i\l\r\e\x\i\z\6\0\6\0\9\i\r\g\c\6\9\5\6\b\c\j\x\4\9\9\2\l\z\0\j\2\d\3\6\v\d\1\x\n\k\b\h\a\s\0\a\s\u\i\v\i\t\4\2\4\7\6\j\p\w\m\3\m\q\t\g\b\y\1\j\w\5\8\u\f\t\u\l\b\4\6\m\p\n\1\5\y\m\u\w\z\z\b\6\a\a\4\9\a\j\4\6\6\t\7\u\i\w\z\5\c ]] 00:07:08.899 19:45:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:08.899 19:45:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:08.899 [2024-07-24 19:45:37.464061] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:08.899 [2024-07-24 19:45:37.464180] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62521 ] 00:07:09.157 [2024-07-24 19:45:37.602634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.157 [2024-07-24 19:45:37.704148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.157 [2024-07-24 19:45:37.761222] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:09.416  Copying: 512/512 [B] (average 250 kBps) 00:07:09.416 00:07:09.416 19:45:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 67ofm1cpp714gshs3icn58dlv0doaoh19xk92gpli3jkp4g3xr13dxjb9br7y2fys0zksksimk1gxhx8zb82d44jffx32kygt3udc02rekhkktt05zpg73o2mt6m6rai8gvyludvid9tb5oj853k54q77n4hqoehfcafldm0arso0yt5nxftylufjorjy65fuce6ne1hcbgs96z9qsget7eqnwh8ygm4qmjfqj3x1r38z45eni6m5zir6gggd8gyt98uz4jx7k1z7na4qwr2ufqzpynsm0nw1kas8n0nuoxj0pp82wm2h04wdglnxq88uv7whztqz8sd0ggclmuol698zfw325xmv83nj1lz3aa7xkg2elch1u4coaswd1d4l3af0uah2x7uddzilrexiz60609irgc6956bcjx4992lz0j2d36vd1xnkbhas0asuivit42476jpwm3mqtgby1jw58uftulb46mpn15ymuwzzb6aa49aj466t7uiwz5c == 
\6\7\o\f\m\1\c\p\p\7\1\4\g\s\h\s\3\i\c\n\5\8\d\l\v\0\d\o\a\o\h\1\9\x\k\9\2\g\p\l\i\3\j\k\p\4\g\3\x\r\1\3\d\x\j\b\9\b\r\7\y\2\f\y\s\0\z\k\s\k\s\i\m\k\1\g\x\h\x\8\z\b\8\2\d\4\4\j\f\f\x\3\2\k\y\g\t\3\u\d\c\0\2\r\e\k\h\k\k\t\t\0\5\z\p\g\7\3\o\2\m\t\6\m\6\r\a\i\8\g\v\y\l\u\d\v\i\d\9\t\b\5\o\j\8\5\3\k\5\4\q\7\7\n\4\h\q\o\e\h\f\c\a\f\l\d\m\0\a\r\s\o\0\y\t\5\n\x\f\t\y\l\u\f\j\o\r\j\y\6\5\f\u\c\e\6\n\e\1\h\c\b\g\s\9\6\z\9\q\s\g\e\t\7\e\q\n\w\h\8\y\g\m\4\q\m\j\f\q\j\3\x\1\r\3\8\z\4\5\e\n\i\6\m\5\z\i\r\6\g\g\g\d\8\g\y\t\9\8\u\z\4\j\x\7\k\1\z\7\n\a\4\q\w\r\2\u\f\q\z\p\y\n\s\m\0\n\w\1\k\a\s\8\n\0\n\u\o\x\j\0\p\p\8\2\w\m\2\h\0\4\w\d\g\l\n\x\q\8\8\u\v\7\w\h\z\t\q\z\8\s\d\0\g\g\c\l\m\u\o\l\6\9\8\z\f\w\3\2\5\x\m\v\8\3\n\j\1\l\z\3\a\a\7\x\k\g\2\e\l\c\h\1\u\4\c\o\a\s\w\d\1\d\4\l\3\a\f\0\u\a\h\2\x\7\u\d\d\z\i\l\r\e\x\i\z\6\0\6\0\9\i\r\g\c\6\9\5\6\b\c\j\x\4\9\9\2\l\z\0\j\2\d\3\6\v\d\1\x\n\k\b\h\a\s\0\a\s\u\i\v\i\t\4\2\4\7\6\j\p\w\m\3\m\q\t\g\b\y\1\j\w\5\8\u\f\t\u\l\b\4\6\m\p\n\1\5\y\m\u\w\z\z\b\6\a\a\4\9\a\j\4\6\6\t\7\u\i\w\z\5\c ]] 00:07:09.416 19:45:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:09.416 19:45:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:09.416 19:45:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:09.416 19:45:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:09.416 19:45:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:09.416 19:45:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:09.685 [2024-07-24 19:45:38.117363] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:09.685 [2024-07-24 19:45:38.117458] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62523 ] 00:07:09.685 [2024-07-24 19:45:38.255703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.944 [2024-07-24 19:45:38.363705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.944 [2024-07-24 19:45:38.419419] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:10.203  Copying: 512/512 [B] (average 500 kBps) 00:07:10.203 00:07:10.203 19:45:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 91rs6xpmpyu6oyemspm3xa248xtywff4v40mmdktkvgvdfqx7pqsoy9feo10leez58867790b05gltz4l12r69yshv0ft75tw7hivhjufkjtbetwmaeh0jijzsa0kkmab7cems0q63dzmgqbjow51s92hiln5jt8ehcha0l7jzzp22nbjvg5hiycpfdcot1k5sdx2hunfol4k22yzwo226kd86h4v9zgp8m5kc1t1zx611ws7220inuy3gtfu2mkm9ny5uzpeb043rqp701c87dw9zyr058ktvc8xpgy8k51qq6iswk2zk7k408dmvvw70i6u0elafyw1hc8zhb12zex19xnbqil6ff5tw5o3qsze30xxp71b4aq71j1e8fxol41osw1362gnnjkg3wzr5h0vekgx9ta5pc5fq8nmgkq8h03fmzw2606xx6kmv1bfs4cqrf3nxuilofum8gnl01h4cav5dq03lsk0af3x8cs5czvt8xpww20a4t5jnzu == \9\1\r\s\6\x\p\m\p\y\u\6\o\y\e\m\s\p\m\3\x\a\2\4\8\x\t\y\w\f\f\4\v\4\0\m\m\d\k\t\k\v\g\v\d\f\q\x\7\p\q\s\o\y\9\f\e\o\1\0\l\e\e\z\5\8\8\6\7\7\9\0\b\0\5\g\l\t\z\4\l\1\2\r\6\9\y\s\h\v\0\f\t\7\5\t\w\7\h\i\v\h\j\u\f\k\j\t\b\e\t\w\m\a\e\h\0\j\i\j\z\s\a\0\k\k\m\a\b\7\c\e\m\s\0\q\6\3\d\z\m\g\q\b\j\o\w\5\1\s\9\2\h\i\l\n\5\j\t\8\e\h\c\h\a\0\l\7\j\z\z\p\2\2\n\b\j\v\g\5\h\i\y\c\p\f\d\c\o\t\1\k\5\s\d\x\2\h\u\n\f\o\l\4\k\2\2\y\z\w\o\2\2\6\k\d\8\6\h\4\v\9\z\g\p\8\m\5\k\c\1\t\1\z\x\6\1\1\w\s\7\2\2\0\i\n\u\y\3\g\t\f\u\2\m\k\m\9\n\y\5\u\z\p\e\b\0\4\3\r\q\p\7\0\1\c\8\7\d\w\9\z\y\r\0\5\8\k\t\v\c\8\x\p\g\y\8\k\5\1\q\q\6\i\s\w\k\2\z\k\7\k\4\0\8\d\m\v\v\w\7\0\i\6\u\0\e\l\a\f\y\w\1\h\c\8\z\h\b\1\2\z\e\x\1\9\x\n\b\q\i\l\6\f\f\5\t\w\5\o\3\q\s\z\e\3\0\x\x\p\7\1\b\4\a\q\7\1\j\1\e\8\f\x\o\l\4\1\o\s\w\1\3\6\2\g\n\n\j\k\g\3\w\z\r\5\h\0\v\e\k\g\x\9\t\a\5\p\c\5\f\q\8\n\m\g\k\q\8\h\0\3\f\m\z\w\2\6\0\6\x\x\6\k\m\v\1\b\f\s\4\c\q\r\f\3\n\x\u\i\l\o\f\u\m\8\g\n\l\0\1\h\4\c\a\v\5\d\q\0\3\l\s\k\0\a\f\3\x\8\c\s\5\c\z\v\t\8\x\p\w\w\2\0\a\4\t\5\j\n\z\u ]] 00:07:10.203 19:45:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:10.203 19:45:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:10.203 [2024-07-24 19:45:38.760790] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:10.203 [2024-07-24 19:45:38.760939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62536 ] 00:07:10.474 [2024-07-24 19:45:38.901900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.474 [2024-07-24 19:45:39.022186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.474 [2024-07-24 19:45:39.079941] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:10.734  Copying: 512/512 [B] (average 500 kBps) 00:07:10.734 00:07:10.734 19:45:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 91rs6xpmpyu6oyemspm3xa248xtywff4v40mmdktkvgvdfqx7pqsoy9feo10leez58867790b05gltz4l12r69yshv0ft75tw7hivhjufkjtbetwmaeh0jijzsa0kkmab7cems0q63dzmgqbjow51s92hiln5jt8ehcha0l7jzzp22nbjvg5hiycpfdcot1k5sdx2hunfol4k22yzwo226kd86h4v9zgp8m5kc1t1zx611ws7220inuy3gtfu2mkm9ny5uzpeb043rqp701c87dw9zyr058ktvc8xpgy8k51qq6iswk2zk7k408dmvvw70i6u0elafyw1hc8zhb12zex19xnbqil6ff5tw5o3qsze30xxp71b4aq71j1e8fxol41osw1362gnnjkg3wzr5h0vekgx9ta5pc5fq8nmgkq8h03fmzw2606xx6kmv1bfs4cqrf3nxuilofum8gnl01h4cav5dq03lsk0af3x8cs5czvt8xpww20a4t5jnzu == \9\1\r\s\6\x\p\m\p\y\u\6\o\y\e\m\s\p\m\3\x\a\2\4\8\x\t\y\w\f\f\4\v\4\0\m\m\d\k\t\k\v\g\v\d\f\q\x\7\p\q\s\o\y\9\f\e\o\1\0\l\e\e\z\5\8\8\6\7\7\9\0\b\0\5\g\l\t\z\4\l\1\2\r\6\9\y\s\h\v\0\f\t\7\5\t\w\7\h\i\v\h\j\u\f\k\j\t\b\e\t\w\m\a\e\h\0\j\i\j\z\s\a\0\k\k\m\a\b\7\c\e\m\s\0\q\6\3\d\z\m\g\q\b\j\o\w\5\1\s\9\2\h\i\l\n\5\j\t\8\e\h\c\h\a\0\l\7\j\z\z\p\2\2\n\b\j\v\g\5\h\i\y\c\p\f\d\c\o\t\1\k\5\s\d\x\2\h\u\n\f\o\l\4\k\2\2\y\z\w\o\2\2\6\k\d\8\6\h\4\v\9\z\g\p\8\m\5\k\c\1\t\1\z\x\6\1\1\w\s\7\2\2\0\i\n\u\y\3\g\t\f\u\2\m\k\m\9\n\y\5\u\z\p\e\b\0\4\3\r\q\p\7\0\1\c\8\7\d\w\9\z\y\r\0\5\8\k\t\v\c\8\x\p\g\y\8\k\5\1\q\q\6\i\s\w\k\2\z\k\7\k\4\0\8\d\m\v\v\w\7\0\i\6\u\0\e\l\a\f\y\w\1\h\c\8\z\h\b\1\2\z\e\x\1\9\x\n\b\q\i\l\6\f\f\5\t\w\5\o\3\q\s\z\e\3\0\x\x\p\7\1\b\4\a\q\7\1\j\1\e\8\f\x\o\l\4\1\o\s\w\1\3\6\2\g\n\n\j\k\g\3\w\z\r\5\h\0\v\e\k\g\x\9\t\a\5\p\c\5\f\q\8\n\m\g\k\q\8\h\0\3\f\m\z\w\2\6\0\6\x\x\6\k\m\v\1\b\f\s\4\c\q\r\f\3\n\x\u\i\l\o\f\u\m\8\g\n\l\0\1\h\4\c\a\v\5\d\q\0\3\l\s\k\0\a\f\3\x\8\c\s\5\c\z\v\t\8\x\p\w\w\2\0\a\4\t\5\j\n\z\u ]] 00:07:10.734 19:45:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:10.734 19:45:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:10.993 [2024-07-24 19:45:39.432669] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:10.993 [2024-07-24 19:45:39.432775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62549 ] 00:07:10.993 [2024-07-24 19:45:39.565974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.267 [2024-07-24 19:45:39.665245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.267 [2024-07-24 19:45:39.721229] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:11.526  Copying: 512/512 [B] (average 166 kBps) 00:07:11.526 00:07:11.526 19:45:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 91rs6xpmpyu6oyemspm3xa248xtywff4v40mmdktkvgvdfqx7pqsoy9feo10leez58867790b05gltz4l12r69yshv0ft75tw7hivhjufkjtbetwmaeh0jijzsa0kkmab7cems0q63dzmgqbjow51s92hiln5jt8ehcha0l7jzzp22nbjvg5hiycpfdcot1k5sdx2hunfol4k22yzwo226kd86h4v9zgp8m5kc1t1zx611ws7220inuy3gtfu2mkm9ny5uzpeb043rqp701c87dw9zyr058ktvc8xpgy8k51qq6iswk2zk7k408dmvvw70i6u0elafyw1hc8zhb12zex19xnbqil6ff5tw5o3qsze30xxp71b4aq71j1e8fxol41osw1362gnnjkg3wzr5h0vekgx9ta5pc5fq8nmgkq8h03fmzw2606xx6kmv1bfs4cqrf3nxuilofum8gnl01h4cav5dq03lsk0af3x8cs5czvt8xpww20a4t5jnzu == \9\1\r\s\6\x\p\m\p\y\u\6\o\y\e\m\s\p\m\3\x\a\2\4\8\x\t\y\w\f\f\4\v\4\0\m\m\d\k\t\k\v\g\v\d\f\q\x\7\p\q\s\o\y\9\f\e\o\1\0\l\e\e\z\5\8\8\6\7\7\9\0\b\0\5\g\l\t\z\4\l\1\2\r\6\9\y\s\h\v\0\f\t\7\5\t\w\7\h\i\v\h\j\u\f\k\j\t\b\e\t\w\m\a\e\h\0\j\i\j\z\s\a\0\k\k\m\a\b\7\c\e\m\s\0\q\6\3\d\z\m\g\q\b\j\o\w\5\1\s\9\2\h\i\l\n\5\j\t\8\e\h\c\h\a\0\l\7\j\z\z\p\2\2\n\b\j\v\g\5\h\i\y\c\p\f\d\c\o\t\1\k\5\s\d\x\2\h\u\n\f\o\l\4\k\2\2\y\z\w\o\2\2\6\k\d\8\6\h\4\v\9\z\g\p\8\m\5\k\c\1\t\1\z\x\6\1\1\w\s\7\2\2\0\i\n\u\y\3\g\t\f\u\2\m\k\m\9\n\y\5\u\z\p\e\b\0\4\3\r\q\p\7\0\1\c\8\7\d\w\9\z\y\r\0\5\8\k\t\v\c\8\x\p\g\y\8\k\5\1\q\q\6\i\s\w\k\2\z\k\7\k\4\0\8\d\m\v\v\w\7\0\i\6\u\0\e\l\a\f\y\w\1\h\c\8\z\h\b\1\2\z\e\x\1\9\x\n\b\q\i\l\6\f\f\5\t\w\5\o\3\q\s\z\e\3\0\x\x\p\7\1\b\4\a\q\7\1\j\1\e\8\f\x\o\l\4\1\o\s\w\1\3\6\2\g\n\n\j\k\g\3\w\z\r\5\h\0\v\e\k\g\x\9\t\a\5\p\c\5\f\q\8\n\m\g\k\q\8\h\0\3\f\m\z\w\2\6\0\6\x\x\6\k\m\v\1\b\f\s\4\c\q\r\f\3\n\x\u\i\l\o\f\u\m\8\g\n\l\0\1\h\4\c\a\v\5\d\q\0\3\l\s\k\0\a\f\3\x\8\c\s\5\c\z\v\t\8\x\p\w\w\2\0\a\4\t\5\j\n\z\u ]] 00:07:11.526 19:45:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:11.526 19:45:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:11.526 [2024-07-24 19:45:40.059097] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:11.526 [2024-07-24 19:45:40.059222] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62551 ] 00:07:11.785 [2024-07-24 19:45:40.196931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.785 [2024-07-24 19:45:40.305408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.785 [2024-07-24 19:45:40.362064] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.060  Copying: 512/512 [B] (average 500 kBps) 00:07:12.060 00:07:12.060 19:45:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 91rs6xpmpyu6oyemspm3xa248xtywff4v40mmdktkvgvdfqx7pqsoy9feo10leez58867790b05gltz4l12r69yshv0ft75tw7hivhjufkjtbetwmaeh0jijzsa0kkmab7cems0q63dzmgqbjow51s92hiln5jt8ehcha0l7jzzp22nbjvg5hiycpfdcot1k5sdx2hunfol4k22yzwo226kd86h4v9zgp8m5kc1t1zx611ws7220inuy3gtfu2mkm9ny5uzpeb043rqp701c87dw9zyr058ktvc8xpgy8k51qq6iswk2zk7k408dmvvw70i6u0elafyw1hc8zhb12zex19xnbqil6ff5tw5o3qsze30xxp71b4aq71j1e8fxol41osw1362gnnjkg3wzr5h0vekgx9ta5pc5fq8nmgkq8h03fmzw2606xx6kmv1bfs4cqrf3nxuilofum8gnl01h4cav5dq03lsk0af3x8cs5czvt8xpww20a4t5jnzu == \9\1\r\s\6\x\p\m\p\y\u\6\o\y\e\m\s\p\m\3\x\a\2\4\8\x\t\y\w\f\f\4\v\4\0\m\m\d\k\t\k\v\g\v\d\f\q\x\7\p\q\s\o\y\9\f\e\o\1\0\l\e\e\z\5\8\8\6\7\7\9\0\b\0\5\g\l\t\z\4\l\1\2\r\6\9\y\s\h\v\0\f\t\7\5\t\w\7\h\i\v\h\j\u\f\k\j\t\b\e\t\w\m\a\e\h\0\j\i\j\z\s\a\0\k\k\m\a\b\7\c\e\m\s\0\q\6\3\d\z\m\g\q\b\j\o\w\5\1\s\9\2\h\i\l\n\5\j\t\8\e\h\c\h\a\0\l\7\j\z\z\p\2\2\n\b\j\v\g\5\h\i\y\c\p\f\d\c\o\t\1\k\5\s\d\x\2\h\u\n\f\o\l\4\k\2\2\y\z\w\o\2\2\6\k\d\8\6\h\4\v\9\z\g\p\8\m\5\k\c\1\t\1\z\x\6\1\1\w\s\7\2\2\0\i\n\u\y\3\g\t\f\u\2\m\k\m\9\n\y\5\u\z\p\e\b\0\4\3\r\q\p\7\0\1\c\8\7\d\w\9\z\y\r\0\5\8\k\t\v\c\8\x\p\g\y\8\k\5\1\q\q\6\i\s\w\k\2\z\k\7\k\4\0\8\d\m\v\v\w\7\0\i\6\u\0\e\l\a\f\y\w\1\h\c\8\z\h\b\1\2\z\e\x\1\9\x\n\b\q\i\l\6\f\f\5\t\w\5\o\3\q\s\z\e\3\0\x\x\p\7\1\b\4\a\q\7\1\j\1\e\8\f\x\o\l\4\1\o\s\w\1\3\6\2\g\n\n\j\k\g\3\w\z\r\5\h\0\v\e\k\g\x\9\t\a\5\p\c\5\f\q\8\n\m\g\k\q\8\h\0\3\f\m\z\w\2\6\0\6\x\x\6\k\m\v\1\b\f\s\4\c\q\r\f\3\n\x\u\i\l\o\f\u\m\8\g\n\l\0\1\h\4\c\a\v\5\d\q\0\3\l\s\k\0\a\f\3\x\8\c\s\5\c\z\v\t\8\x\p\w\w\2\0\a\4\t\5\j\n\z\u ]] 00:07:12.060 00:07:12.060 real 0m5.267s 00:07:12.060 user 0m3.043s 00:07:12.060 sys 0m1.229s 00:07:12.060 19:45:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.060 19:45:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:12.060 ************************************ 00:07:12.060 END TEST dd_flags_misc_forced_aio 00:07:12.060 ************************************ 00:07:12.060 19:45:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:12.060 19:45:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:12.060 19:45:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:12.060 00:07:12.060 real 0m22.639s 00:07:12.060 user 0m11.806s 00:07:12.060 sys 0m6.684s 00:07:12.060 19:45:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.060 19:45:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:12.060 
************************************ 00:07:12.060 END TEST spdk_dd_posix 00:07:12.060 ************************************ 00:07:12.319 19:45:40 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:12.319 19:45:40 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.319 19:45:40 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.319 19:45:40 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:12.319 ************************************ 00:07:12.319 START TEST spdk_dd_malloc 00:07:12.319 ************************************ 00:07:12.319 19:45:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:12.319 * Looking for test storage... 00:07:12.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:12.319 19:45:40 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:12.319 19:45:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.319 19:45:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.319 19:45:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.319 19:45:40 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.319 19:45:40 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.319 19:45:40 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.319 19:45:40 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:12.319 19:45:40 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.320 19:45:40 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:12.320 19:45:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.320 19:45:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.320 19:45:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:12.320 ************************************ 00:07:12.320 START TEST dd_malloc_copy 00:07:12.320 ************************************ 00:07:12.320 19:45:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:07:12.320 19:45:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:12.320 19:45:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:12.320 19:45:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:12.320 19:45:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:12.320 19:45:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:12.320 19:45:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:12.320 19:45:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:12.320 19:45:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:12.320 19:45:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:12.320 19:45:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:12.320 [2024-07-24 19:45:40.893161] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:12.320 [2024-07-24 19:45:40.894108] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62625 ] 00:07:12.320 { 00:07:12.320 "subsystems": [ 00:07:12.320 { 00:07:12.320 "subsystem": "bdev", 00:07:12.320 "config": [ 00:07:12.320 { 00:07:12.320 "params": { 00:07:12.320 "block_size": 512, 00:07:12.320 "num_blocks": 1048576, 00:07:12.320 "name": "malloc0" 00:07:12.320 }, 00:07:12.320 "method": "bdev_malloc_create" 00:07:12.320 }, 00:07:12.320 { 00:07:12.320 "params": { 00:07:12.320 "block_size": 512, 00:07:12.320 "num_blocks": 1048576, 00:07:12.320 "name": "malloc1" 00:07:12.320 }, 00:07:12.320 "method": "bdev_malloc_create" 00:07:12.320 }, 00:07:12.320 { 00:07:12.320 "method": "bdev_wait_for_examine" 00:07:12.320 } 00:07:12.320 ] 00:07:12.320 } 00:07:12.320 ] 00:07:12.320 } 00:07:12.578 [2024-07-24 19:45:41.036551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.578 [2024-07-24 19:45:41.159374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.578 [2024-07-24 19:45:41.221163] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:16.158  Copying: 190/512 [MB] (190 MBps) Copying: 386/512 [MB] (196 MBps) Copying: 512/512 [MB] (average 197 MBps) 00:07:16.158 00:07:16.158 19:45:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:16.158 19:45:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:16.158 19:45:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:16.158 19:45:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:16.417 [2024-07-24 19:45:44.828005] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:16.417 [2024-07-24 19:45:44.828097] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62678 ] 00:07:16.417 { 00:07:16.417 "subsystems": [ 00:07:16.417 { 00:07:16.417 "subsystem": "bdev", 00:07:16.417 "config": [ 00:07:16.417 { 00:07:16.417 "params": { 00:07:16.417 "block_size": 512, 00:07:16.417 "num_blocks": 1048576, 00:07:16.417 "name": "malloc0" 00:07:16.417 }, 00:07:16.417 "method": "bdev_malloc_create" 00:07:16.417 }, 00:07:16.417 { 00:07:16.417 "params": { 00:07:16.417 "block_size": 512, 00:07:16.417 "num_blocks": 1048576, 00:07:16.417 "name": "malloc1" 00:07:16.417 }, 00:07:16.417 "method": "bdev_malloc_create" 00:07:16.417 }, 00:07:16.417 { 00:07:16.417 "method": "bdev_wait_for_examine" 00:07:16.417 } 00:07:16.417 ] 00:07:16.417 } 00:07:16.417 ] 00:07:16.417 } 00:07:16.417 [2024-07-24 19:45:44.966558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.417 [2024-07-24 19:45:45.048217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.676 [2024-07-24 19:45:45.107590] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:19.816  Copying: 218/512 [MB] (218 MBps) Copying: 440/512 [MB] (221 MBps) Copying: 512/512 [MB] (average 216 MBps) 00:07:19.816 00:07:19.816 ************************************ 00:07:19.816 END TEST dd_malloc_copy 00:07:19.816 ************************************ 00:07:19.816 00:07:19.816 real 0m7.576s 00:07:19.816 user 0m6.547s 00:07:19.816 sys 0m0.877s 00:07:19.816 19:45:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.816 19:45:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:19.816 ************************************ 00:07:19.816 END TEST spdk_dd_malloc 00:07:19.816 ************************************ 00:07:19.816 00:07:19.816 real 0m7.721s 00:07:19.816 user 0m6.608s 00:07:19.816 sys 0m0.962s 00:07:19.816 19:45:48 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.816 19:45:48 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:20.075 19:45:48 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:20.075 19:45:48 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:20.075 19:45:48 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.075 19:45:48 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:20.075 ************************************ 00:07:20.075 START TEST spdk_dd_bdev_to_bdev 00:07:20.075 ************************************ 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:20.075 * Looking for test storage... 
00:07:20.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:20.075 
19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:20.075 ************************************ 00:07:20.075 START TEST dd_inflate_file 00:07:20.075 ************************************ 00:07:20.075 19:45:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:20.075 [2024-07-24 19:45:48.637668] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:20.075 [2024-07-24 19:45:48.637961] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62784 ] 00:07:20.334 [2024-07-24 19:45:48.770736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.334 [2024-07-24 19:45:48.882403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.334 [2024-07-24 19:45:48.941692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:20.593  Copying: 64/64 [MB] (average 1523 MBps) 00:07:20.593 00:07:20.593 ************************************ 00:07:20.593 END TEST dd_inflate_file 00:07:20.593 ************************************ 00:07:20.593 00:07:20.593 real 0m0.643s 00:07:20.593 user 0m0.383s 00:07:20.593 sys 0m0.316s 00:07:20.593 19:45:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.593 19:45:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:20.851 19:45:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:20.851 19:45:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:20.851 19:45:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:20.851 19:45:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:20.851 19:45:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:20.851 19:45:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.851 19:45:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:20.851 19:45:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:20.851 19:45:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:20.851 ************************************ 00:07:20.851 START TEST dd_copy_to_out_bdev 00:07:20.851 ************************************ 00:07:20.851 19:45:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:20.851 [2024-07-24 19:45:49.342717] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:20.851 [2024-07-24 19:45:49.342818] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62817 ] 00:07:20.851 { 00:07:20.851 "subsystems": [ 00:07:20.851 { 00:07:20.851 "subsystem": "bdev", 00:07:20.851 "config": [ 00:07:20.851 { 00:07:20.851 "params": { 00:07:20.851 "trtype": "pcie", 00:07:20.851 "traddr": "0000:00:10.0", 00:07:20.851 "name": "Nvme0" 00:07:20.851 }, 00:07:20.851 "method": "bdev_nvme_attach_controller" 00:07:20.851 }, 00:07:20.851 { 00:07:20.851 "params": { 00:07:20.851 "trtype": "pcie", 00:07:20.851 "traddr": "0000:00:11.0", 00:07:20.851 "name": "Nvme1" 00:07:20.851 }, 00:07:20.851 "method": "bdev_nvme_attach_controller" 00:07:20.851 }, 00:07:20.852 { 00:07:20.852 "method": "bdev_wait_for_examine" 00:07:20.852 } 00:07:20.852 ] 00:07:20.852 } 00:07:20.852 ] 00:07:20.852 } 00:07:20.852 [2024-07-24 19:45:49.479708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.110 [2024-07-24 19:45:49.602298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.110 [2024-07-24 19:45:49.661191] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:22.740  Copying: 57/64 [MB] (57 MBps) Copying: 64/64 [MB] (average 55 MBps) 00:07:22.740 00:07:22.740 00:07:22.740 real 0m1.955s 00:07:22.740 user 0m1.707s 00:07:22.740 sys 0m1.515s 00:07:22.740 19:45:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.740 19:45:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:22.740 ************************************ 00:07:22.740 END TEST dd_copy_to_out_bdev 00:07:22.740 ************************************ 00:07:22.740 19:45:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:22.740 19:45:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:22.740 19:45:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:22.740 19:45:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.740 19:45:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:22.740 ************************************ 00:07:22.740 START TEST dd_offset_magic 00:07:22.740 ************************************ 00:07:22.740 19:45:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:07:22.740 19:45:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:22.740 19:45:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:22.740 19:45:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:22.740 19:45:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:22.740 19:45:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:22.740 19:45:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:22.740 19:45:51 
spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:22.740 19:45:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:22.740 [2024-07-24 19:45:51.358920] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:22.740 [2024-07-24 19:45:51.359031] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62862 ] 00:07:22.740 { 00:07:22.740 "subsystems": [ 00:07:22.740 { 00:07:22.740 "subsystem": "bdev", 00:07:22.740 "config": [ 00:07:22.740 { 00:07:22.740 "params": { 00:07:22.740 "trtype": "pcie", 00:07:22.740 "traddr": "0000:00:10.0", 00:07:22.740 "name": "Nvme0" 00:07:22.740 }, 00:07:22.740 "method": "bdev_nvme_attach_controller" 00:07:22.740 }, 00:07:22.740 { 00:07:22.740 "params": { 00:07:22.740 "trtype": "pcie", 00:07:22.740 "traddr": "0000:00:11.0", 00:07:22.740 "name": "Nvme1" 00:07:22.740 }, 00:07:22.740 "method": "bdev_nvme_attach_controller" 00:07:22.740 }, 00:07:22.740 { 00:07:22.740 "method": "bdev_wait_for_examine" 00:07:22.740 } 00:07:22.740 ] 00:07:22.740 } 00:07:22.740 ] 00:07:22.740 } 00:07:23.000 [2024-07-24 19:45:51.497899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.000 [2024-07-24 19:45:51.614839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.274 [2024-07-24 19:45:51.673629] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:23.533  Copying: 65/65 [MB] (average 955 MBps) 00:07:23.533 00:07:23.533 19:45:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:23.533 19:45:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:23.533 19:45:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:23.533 19:45:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:23.791 { 00:07:23.791 "subsystems": [ 00:07:23.791 { 00:07:23.791 "subsystem": "bdev", 00:07:23.791 "config": [ 00:07:23.791 { 00:07:23.791 "params": { 00:07:23.791 "trtype": "pcie", 00:07:23.791 "traddr": "0000:00:10.0", 00:07:23.791 "name": "Nvme0" 00:07:23.791 }, 00:07:23.791 "method": "bdev_nvme_attach_controller" 00:07:23.791 }, 00:07:23.791 { 00:07:23.791 "params": { 00:07:23.791 "trtype": "pcie", 00:07:23.791 "traddr": "0000:00:11.0", 00:07:23.791 "name": "Nvme1" 00:07:23.791 }, 00:07:23.791 "method": "bdev_nvme_attach_controller" 00:07:23.791 }, 00:07:23.791 { 00:07:23.791 "method": "bdev_wait_for_examine" 00:07:23.791 } 00:07:23.791 ] 00:07:23.791 } 00:07:23.791 ] 00:07:23.791 } 00:07:23.791 [2024-07-24 19:45:52.232393] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:23.791 [2024-07-24 19:45:52.232487] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62882 ] 00:07:23.791 [2024-07-24 19:45:52.369654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.049 [2024-07-24 19:45:52.459725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.050 [2024-07-24 19:45:52.517162] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:24.309  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:24.309 00:07:24.309 19:45:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:24.309 19:45:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:24.309 19:45:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:24.309 19:45:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:24.309 19:45:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:24.309 19:45:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:24.309 19:45:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:24.309 [2024-07-24 19:45:52.967244] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:24.309 [2024-07-24 19:45:52.967325] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62899 ] 00:07:24.567 { 00:07:24.567 "subsystems": [ 00:07:24.567 { 00:07:24.567 "subsystem": "bdev", 00:07:24.567 "config": [ 00:07:24.567 { 00:07:24.567 "params": { 00:07:24.567 "trtype": "pcie", 00:07:24.567 "traddr": "0000:00:10.0", 00:07:24.567 "name": "Nvme0" 00:07:24.567 }, 00:07:24.567 "method": "bdev_nvme_attach_controller" 00:07:24.567 }, 00:07:24.567 { 00:07:24.567 "params": { 00:07:24.567 "trtype": "pcie", 00:07:24.567 "traddr": "0000:00:11.0", 00:07:24.567 "name": "Nvme1" 00:07:24.567 }, 00:07:24.567 "method": "bdev_nvme_attach_controller" 00:07:24.567 }, 00:07:24.567 { 00:07:24.567 "method": "bdev_wait_for_examine" 00:07:24.567 } 00:07:24.567 ] 00:07:24.567 } 00:07:24.567 ] 00:07:24.567 } 00:07:24.567 [2024-07-24 19:45:53.106298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.567 [2024-07-24 19:45:53.221304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.825 [2024-07-24 19:45:53.280787] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:25.342  Copying: 65/65 [MB] (average 984 MBps) 00:07:25.343 00:07:25.343 19:45:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:25.343 19:45:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:25.343 19:45:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:25.343 19:45:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:25.343 [2024-07-24 19:45:53.853606] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:25.343 [2024-07-24 19:45:53.853721] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62919 ] 00:07:25.343 { 00:07:25.343 "subsystems": [ 00:07:25.343 { 00:07:25.343 "subsystem": "bdev", 00:07:25.343 "config": [ 00:07:25.343 { 00:07:25.343 "params": { 00:07:25.343 "trtype": "pcie", 00:07:25.343 "traddr": "0000:00:10.0", 00:07:25.343 "name": "Nvme0" 00:07:25.343 }, 00:07:25.343 "method": "bdev_nvme_attach_controller" 00:07:25.343 }, 00:07:25.343 { 00:07:25.343 "params": { 00:07:25.343 "trtype": "pcie", 00:07:25.343 "traddr": "0000:00:11.0", 00:07:25.343 "name": "Nvme1" 00:07:25.343 }, 00:07:25.343 "method": "bdev_nvme_attach_controller" 00:07:25.343 }, 00:07:25.343 { 00:07:25.343 "method": "bdev_wait_for_examine" 00:07:25.343 } 00:07:25.343 ] 00:07:25.343 } 00:07:25.343 ] 00:07:25.343 } 00:07:25.343 [2024-07-24 19:45:53.994928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.601 [2024-07-24 19:45:54.104854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.601 [2024-07-24 19:45:54.158808] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:26.117  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:26.117 00:07:26.117 19:45:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:26.117 19:45:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:26.117 00:07:26.117 real 0m3.245s 00:07:26.117 user 0m2.359s 00:07:26.117 sys 0m0.965s 00:07:26.117 19:45:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.117 ************************************ 00:07:26.117 END TEST dd_offset_magic 00:07:26.117 ************************************ 00:07:26.117 19:45:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:26.117 19:45:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:26.117 19:45:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:26.117 19:45:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:26.117 19:45:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:26.117 19:45:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:26.117 19:45:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:26.117 19:45:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:26.117 19:45:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:26.117 19:45:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:26.117 19:45:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:26.117 19:45:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:26.117 [2024-07-24 19:45:54.644358] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:26.117 [2024-07-24 19:45:54.644456] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62954 ] 00:07:26.117 { 00:07:26.117 "subsystems": [ 00:07:26.117 { 00:07:26.117 "subsystem": "bdev", 00:07:26.117 "config": [ 00:07:26.117 { 00:07:26.117 "params": { 00:07:26.117 "trtype": "pcie", 00:07:26.117 "traddr": "0000:00:10.0", 00:07:26.117 "name": "Nvme0" 00:07:26.117 }, 00:07:26.117 "method": "bdev_nvme_attach_controller" 00:07:26.117 }, 00:07:26.117 { 00:07:26.117 "params": { 00:07:26.117 "trtype": "pcie", 00:07:26.117 "traddr": "0000:00:11.0", 00:07:26.117 "name": "Nvme1" 00:07:26.117 }, 00:07:26.117 "method": "bdev_nvme_attach_controller" 00:07:26.117 }, 00:07:26.117 { 00:07:26.117 "method": "bdev_wait_for_examine" 00:07:26.117 } 00:07:26.117 ] 00:07:26.117 } 00:07:26.117 ] 00:07:26.117 } 00:07:26.117 [2024-07-24 19:45:54.781382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.375 [2024-07-24 19:45:54.872639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.375 [2024-07-24 19:45:54.930811] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:26.892  Copying: 5120/5120 [kB] (average 1000 MBps) 00:07:26.892 00:07:26.892 19:45:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:26.892 19:45:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:26.892 19:45:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:26.892 19:45:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:26.892 19:45:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:26.892 19:45:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:26.892 19:45:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:26.892 19:45:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:26.892 19:45:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:26.892 19:45:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:26.892 { 00:07:26.892 "subsystems": [ 00:07:26.892 { 00:07:26.892 "subsystem": "bdev", 00:07:26.892 "config": [ 00:07:26.892 { 00:07:26.892 "params": { 00:07:26.892 "trtype": "pcie", 00:07:26.892 "traddr": "0000:00:10.0", 00:07:26.892 "name": "Nvme0" 00:07:26.892 }, 00:07:26.892 "method": "bdev_nvme_attach_controller" 00:07:26.892 }, 00:07:26.892 { 00:07:26.892 "params": { 00:07:26.892 "trtype": "pcie", 00:07:26.892 "traddr": "0000:00:11.0", 00:07:26.892 "name": "Nvme1" 00:07:26.892 }, 00:07:26.892 "method": "bdev_nvme_attach_controller" 00:07:26.892 }, 00:07:26.892 { 00:07:26.892 "method": "bdev_wait_for_examine" 00:07:26.892 } 00:07:26.892 ] 00:07:26.892 } 00:07:26.892 ] 00:07:26.892 } 00:07:26.892 [2024-07-24 19:45:55.441089] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:26.892 [2024-07-24 19:45:55.441508] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62975 ] 00:07:27.150 [2024-07-24 19:45:55.574770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.150 [2024-07-24 19:45:55.702734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.150 [2024-07-24 19:45:55.762594] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:27.668  Copying: 5120/5120 [kB] (average 833 MBps) 00:07:27.668 00:07:27.668 19:45:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:27.668 ************************************ 00:07:27.668 END TEST spdk_dd_bdev_to_bdev 00:07:27.668 ************************************ 00:07:27.668 00:07:27.668 real 0m7.699s 00:07:27.668 user 0m5.669s 00:07:27.668 sys 0m3.535s 00:07:27.668 19:45:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.668 19:45:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:27.668 19:45:56 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:27.668 19:45:56 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:27.668 19:45:56 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.668 19:45:56 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.668 19:45:56 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:27.668 ************************************ 00:07:27.668 START TEST spdk_dd_uring 00:07:27.668 ************************************ 00:07:27.668 19:45:56 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:27.668 * Looking for test storage... 
00:07:27.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:27.668 19:45:56 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:27.668 19:45:56 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.668 19:45:56 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.668 19:45:56 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.668 19:45:56 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.668 19:45:56 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.668 19:45:56 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.668 19:45:56 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:27.668 19:45:56 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.668 19:45:56 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:27.668 19:45:56 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.927 19:45:56 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.927 19:45:56 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:27.927 ************************************ 00:07:27.927 START TEST dd_uring_copy 00:07:27.927 ************************************ 00:07:27.927 
19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:07:27.927 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:27.927 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:27.927 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:27.927 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:27.927 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:27.927 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:27.927 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:07:27.927 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:07:27.928 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:27.928 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:07:27.928 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:27.928 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:27.928 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:07:27.928 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:07:27.928 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:07:27.928 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:07:27.928 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:27.928 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:27.928 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:27.928 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:27.928 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:27.928 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:27.928 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:27.928 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:27.928 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:27.928 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=xhw5t9p5df3uve1860ni0jlxqua4te4stxqi76wjbzf4yqmc35gmrnys88cmb826kp1c0mrbloeap3whydmzhq1oup3wo2v20m89n71o7bx0skcdmkt6c8ep9u0u7oalo3umwby1m2q5327wxi4flab517qe4y4o7172cbg3cjdxh3cvf3409kvvs8qozvl23do2fcji0z7whtaqcfesadj3kalh6kgmskg044qshugg1sbpgax9li30fj08ryozqor947xrdiiec571272hwrrm1ynjlc88yj70molgxojoivz6k39nns5ecp28i8a9ka8u43er9x82d4m3kaaly378xtbloo6pta9pdwbdav25603l2sti40z353fpt1n69ytwaqhmgqii9xasjwoa5a24i4rk4pars9a9qrt3dka1vzo89cmkpmfj98xv82gxehe1i885rs74l7sh4t6ejawnr2zbs975yyew12lynmb457qt0y6kauvehlscktssqxboxp5j4pj4kw0n0oqkk5arwy1ff4d3oynt1bu2hz7jzsb8psol4svtlixmi97tetktjxwp8atevzvlnecwt3jnocsxu3v8klt88p5ysisnjwnc9k1vq25sle3wrn0xccskzyz49e0vj9ihywe02u9h7jvgt6yhgust4f0xhl590ji6vbh28w92q4fzjzvr2kje8kk156cg8zacyt8erv64j9t0cga0ls9ocvn3unmeb2h9phlhqojs9s1dzqxvq2uu5k1ghfiuzvmceh86e4wsh6p5c21lmmryjotl9qpe52bnwf9bkqwtg6s9cnk8a3q7jw65zjnv50j70gthy7320og3864dece1fadjuktncu0b4bkwpoib0nd0p5vzaqi5te5m387squ9t05yrdb0ahhg61s3c6wgcb9rfcvkoypcy95ejpamqzqabexjc1mnxnkkr0mpvb2txygaua6ajilhhvh4lb7w6rzoctfslpvgah3cl4fbg7asd57gq 00:07:27.928 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo xhw5t9p5df3uve1860ni0jlxqua4te4stxqi76wjbzf4yqmc35gmrnys88cmb826kp1c0mrbloeap3whydmzhq1oup3wo2v20m89n71o7bx0skcdmkt6c8ep9u0u7oalo3umwby1m2q5327wxi4flab517qe4y4o7172cbg3cjdxh3cvf3409kvvs8qozvl23do2fcji0z7whtaqcfesadj3kalh6kgmskg044qshugg1sbpgax9li30fj08ryozqor947xrdiiec571272hwrrm1ynjlc88yj70molgxojoivz6k39nns5ecp28i8a9ka8u43er9x82d4m3kaaly378xtbloo6pta9pdwbdav25603l2sti40z353fpt1n69ytwaqhmgqii9xasjwoa5a24i4rk4pars9a9qrt3dka1vzo89cmkpmfj98xv82gxehe1i885rs74l7sh4t6ejawnr2zbs975yyew12lynmb457qt0y6kauvehlscktssqxboxp5j4pj4kw0n0oqkk5arwy1ff4d3oynt1bu2hz7jzsb8psol4svtlixmi97tetktjxwp8atevzvlnecwt3jnocsxu3v8klt88p5ysisnjwnc9k1vq25sle3wrn0xccskzyz49e0vj9ihywe02u9h7jvgt6yhgust4f0xhl590ji6vbh28w92q4fzjzvr2kje8kk156cg8zacyt8erv64j9t0cga0ls9ocvn3unmeb2h9phlhqojs9s1dzqxvq2uu5k1ghfiuzvmceh86e4wsh6p5c21lmmryjotl9qpe52bnwf9bkqwtg6s9cnk8a3q7jw65zjnv50j70gthy7320og3864dece1fadjuktncu0b4bkwpoib0nd0p5vzaqi5te5m387squ9t05yrdb0ahhg61s3c6wgcb9rfcvkoypcy95ejpamqzqabexjc1mnxnkkr0mpvb2txygaua6ajilhhvh4lb7w6rzoctfslpvgah3cl4fbg7asd57gq 00:07:27.928 19:45:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:27.928 [2024-07-24 19:45:56.416385] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:27.928 [2024-07-24 19:45:56.416666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63045 ] 00:07:27.928 [2024-07-24 19:45:56.553520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.187 [2024-07-24 19:45:56.666583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.187 [2024-07-24 19:45:56.728547] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:29.323  Copying: 511/511 [MB] (average 1145 MBps) 00:07:29.323 00:07:29.323 19:45:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:29.323 19:45:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:29.323 19:45:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:29.323 19:45:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:29.323 [2024-07-24 19:45:57.893343] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:29.323 [2024-07-24 19:45:57.893442] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63067 ] 00:07:29.323 { 00:07:29.323 "subsystems": [ 00:07:29.323 { 00:07:29.323 "subsystem": "bdev", 00:07:29.323 "config": [ 00:07:29.323 { 00:07:29.323 "params": { 00:07:29.323 "block_size": 512, 00:07:29.323 "num_blocks": 1048576, 00:07:29.323 "name": "malloc0" 00:07:29.323 }, 00:07:29.323 "method": "bdev_malloc_create" 00:07:29.323 }, 00:07:29.323 { 00:07:29.323 "params": { 00:07:29.323 "filename": "/dev/zram1", 00:07:29.323 "name": "uring0" 00:07:29.323 }, 00:07:29.323 "method": "bdev_uring_create" 00:07:29.323 }, 00:07:29.323 { 00:07:29.323 "method": "bdev_wait_for_examine" 00:07:29.323 } 00:07:29.323 ] 00:07:29.323 } 00:07:29.323 ] 00:07:29.323 } 00:07:29.582 [2024-07-24 19:45:58.033888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.582 [2024-07-24 19:45:58.113580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.582 [2024-07-24 19:45:58.168347] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:32.721  Copying: 230/512 [MB] (230 MBps) Copying: 439/512 [MB] (208 MBps) Copying: 512/512 [MB] (average 218 MBps) 00:07:32.721 00:07:32.721 19:46:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:32.721 19:46:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:32.721 19:46:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:32.721 19:46:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:32.721 [2024-07-24 19:46:01.216554] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:32.721 [2024-07-24 19:46:01.216651] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63111 ] 00:07:32.721 { 00:07:32.721 "subsystems": [ 00:07:32.721 { 00:07:32.721 "subsystem": "bdev", 00:07:32.721 "config": [ 00:07:32.721 { 00:07:32.721 "params": { 00:07:32.721 "block_size": 512, 00:07:32.721 "num_blocks": 1048576, 00:07:32.721 "name": "malloc0" 00:07:32.721 }, 00:07:32.721 "method": "bdev_malloc_create" 00:07:32.721 }, 00:07:32.721 { 00:07:32.721 "params": { 00:07:32.721 "filename": "/dev/zram1", 00:07:32.721 "name": "uring0" 00:07:32.721 }, 00:07:32.721 "method": "bdev_uring_create" 00:07:32.721 }, 00:07:32.721 { 00:07:32.721 "method": "bdev_wait_for_examine" 00:07:32.721 } 00:07:32.721 ] 00:07:32.721 } 00:07:32.721 ] 00:07:32.721 } 00:07:32.721 [2024-07-24 19:46:01.359752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.979 [2024-07-24 19:46:01.488049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.979 [2024-07-24 19:46:01.549883] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:36.745  Copying: 173/512 [MB] (173 MBps) Copying: 339/512 [MB] (166 MBps) Copying: 489/512 [MB] (150 MBps) Copying: 512/512 [MB] (average 163 MBps) 00:07:36.745 00:07:36.745 19:46:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:36.745 19:46:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ xhw5t9p5df3uve1860ni0jlxqua4te4stxqi76wjbzf4yqmc35gmrnys88cmb826kp1c0mrbloeap3whydmzhq1oup3wo2v20m89n71o7bx0skcdmkt6c8ep9u0u7oalo3umwby1m2q5327wxi4flab517qe4y4o7172cbg3cjdxh3cvf3409kvvs8qozvl23do2fcji0z7whtaqcfesadj3kalh6kgmskg044qshugg1sbpgax9li30fj08ryozqor947xrdiiec571272hwrrm1ynjlc88yj70molgxojoivz6k39nns5ecp28i8a9ka8u43er9x82d4m3kaaly378xtbloo6pta9pdwbdav25603l2sti40z353fpt1n69ytwaqhmgqii9xasjwoa5a24i4rk4pars9a9qrt3dka1vzo89cmkpmfj98xv82gxehe1i885rs74l7sh4t6ejawnr2zbs975yyew12lynmb457qt0y6kauvehlscktssqxboxp5j4pj4kw0n0oqkk5arwy1ff4d3oynt1bu2hz7jzsb8psol4svtlixmi97tetktjxwp8atevzvlnecwt3jnocsxu3v8klt88p5ysisnjwnc9k1vq25sle3wrn0xccskzyz49e0vj9ihywe02u9h7jvgt6yhgust4f0xhl590ji6vbh28w92q4fzjzvr2kje8kk156cg8zacyt8erv64j9t0cga0ls9ocvn3unmeb2h9phlhqojs9s1dzqxvq2uu5k1ghfiuzvmceh86e4wsh6p5c21lmmryjotl9qpe52bnwf9bkqwtg6s9cnk8a3q7jw65zjnv50j70gthy7320og3864dece1fadjuktncu0b4bkwpoib0nd0p5vzaqi5te5m387squ9t05yrdb0ahhg61s3c6wgcb9rfcvkoypcy95ejpamqzqabexjc1mnxnkkr0mpvb2txygaua6ajilhhvh4lb7w6rzoctfslpvgah3cl4fbg7asd57gq == 
\x\h\w\5\t\9\p\5\d\f\3\u\v\e\1\8\6\0\n\i\0\j\l\x\q\u\a\4\t\e\4\s\t\x\q\i\7\6\w\j\b\z\f\4\y\q\m\c\3\5\g\m\r\n\y\s\8\8\c\m\b\8\2\6\k\p\1\c\0\m\r\b\l\o\e\a\p\3\w\h\y\d\m\z\h\q\1\o\u\p\3\w\o\2\v\2\0\m\8\9\n\7\1\o\7\b\x\0\s\k\c\d\m\k\t\6\c\8\e\p\9\u\0\u\7\o\a\l\o\3\u\m\w\b\y\1\m\2\q\5\3\2\7\w\x\i\4\f\l\a\b\5\1\7\q\e\4\y\4\o\7\1\7\2\c\b\g\3\c\j\d\x\h\3\c\v\f\3\4\0\9\k\v\v\s\8\q\o\z\v\l\2\3\d\o\2\f\c\j\i\0\z\7\w\h\t\a\q\c\f\e\s\a\d\j\3\k\a\l\h\6\k\g\m\s\k\g\0\4\4\q\s\h\u\g\g\1\s\b\p\g\a\x\9\l\i\3\0\f\j\0\8\r\y\o\z\q\o\r\9\4\7\x\r\d\i\i\e\c\5\7\1\2\7\2\h\w\r\r\m\1\y\n\j\l\c\8\8\y\j\7\0\m\o\l\g\x\o\j\o\i\v\z\6\k\3\9\n\n\s\5\e\c\p\2\8\i\8\a\9\k\a\8\u\4\3\e\r\9\x\8\2\d\4\m\3\k\a\a\l\y\3\7\8\x\t\b\l\o\o\6\p\t\a\9\p\d\w\b\d\a\v\2\5\6\0\3\l\2\s\t\i\4\0\z\3\5\3\f\p\t\1\n\6\9\y\t\w\a\q\h\m\g\q\i\i\9\x\a\s\j\w\o\a\5\a\2\4\i\4\r\k\4\p\a\r\s\9\a\9\q\r\t\3\d\k\a\1\v\z\o\8\9\c\m\k\p\m\f\j\9\8\x\v\8\2\g\x\e\h\e\1\i\8\8\5\r\s\7\4\l\7\s\h\4\t\6\e\j\a\w\n\r\2\z\b\s\9\7\5\y\y\e\w\1\2\l\y\n\m\b\4\5\7\q\t\0\y\6\k\a\u\v\e\h\l\s\c\k\t\s\s\q\x\b\o\x\p\5\j\4\p\j\4\k\w\0\n\0\o\q\k\k\5\a\r\w\y\1\f\f\4\d\3\o\y\n\t\1\b\u\2\h\z\7\j\z\s\b\8\p\s\o\l\4\s\v\t\l\i\x\m\i\9\7\t\e\t\k\t\j\x\w\p\8\a\t\e\v\z\v\l\n\e\c\w\t\3\j\n\o\c\s\x\u\3\v\8\k\l\t\8\8\p\5\y\s\i\s\n\j\w\n\c\9\k\1\v\q\2\5\s\l\e\3\w\r\n\0\x\c\c\s\k\z\y\z\4\9\e\0\v\j\9\i\h\y\w\e\0\2\u\9\h\7\j\v\g\t\6\y\h\g\u\s\t\4\f\0\x\h\l\5\9\0\j\i\6\v\b\h\2\8\w\9\2\q\4\f\z\j\z\v\r\2\k\j\e\8\k\k\1\5\6\c\g\8\z\a\c\y\t\8\e\r\v\6\4\j\9\t\0\c\g\a\0\l\s\9\o\c\v\n\3\u\n\m\e\b\2\h\9\p\h\l\h\q\o\j\s\9\s\1\d\z\q\x\v\q\2\u\u\5\k\1\g\h\f\i\u\z\v\m\c\e\h\8\6\e\4\w\s\h\6\p\5\c\2\1\l\m\m\r\y\j\o\t\l\9\q\p\e\5\2\b\n\w\f\9\b\k\q\w\t\g\6\s\9\c\n\k\8\a\3\q\7\j\w\6\5\z\j\n\v\5\0\j\7\0\g\t\h\y\7\3\2\0\o\g\3\8\6\4\d\e\c\e\1\f\a\d\j\u\k\t\n\c\u\0\b\4\b\k\w\p\o\i\b\0\n\d\0\p\5\v\z\a\q\i\5\t\e\5\m\3\8\7\s\q\u\9\t\0\5\y\r\d\b\0\a\h\h\g\6\1\s\3\c\6\w\g\c\b\9\r\f\c\v\k\o\y\p\c\y\9\5\e\j\p\a\m\q\z\q\a\b\e\x\j\c\1\m\n\x\n\k\k\r\0\m\p\v\b\2\t\x\y\g\a\u\a\6\a\j\i\l\h\h\v\h\4\l\b\7\w\6\r\z\o\c\t\f\s\l\p\v\g\a\h\3\c\l\4\f\b\g\7\a\s\d\5\7\g\q ]] 00:07:36.745 19:46:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:36.745 19:46:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ xhw5t9p5df3uve1860ni0jlxqua4te4stxqi76wjbzf4yqmc35gmrnys88cmb826kp1c0mrbloeap3whydmzhq1oup3wo2v20m89n71o7bx0skcdmkt6c8ep9u0u7oalo3umwby1m2q5327wxi4flab517qe4y4o7172cbg3cjdxh3cvf3409kvvs8qozvl23do2fcji0z7whtaqcfesadj3kalh6kgmskg044qshugg1sbpgax9li30fj08ryozqor947xrdiiec571272hwrrm1ynjlc88yj70molgxojoivz6k39nns5ecp28i8a9ka8u43er9x82d4m3kaaly378xtbloo6pta9pdwbdav25603l2sti40z353fpt1n69ytwaqhmgqii9xasjwoa5a24i4rk4pars9a9qrt3dka1vzo89cmkpmfj98xv82gxehe1i885rs74l7sh4t6ejawnr2zbs975yyew12lynmb457qt0y6kauvehlscktssqxboxp5j4pj4kw0n0oqkk5arwy1ff4d3oynt1bu2hz7jzsb8psol4svtlixmi97tetktjxwp8atevzvlnecwt3jnocsxu3v8klt88p5ysisnjwnc9k1vq25sle3wrn0xccskzyz49e0vj9ihywe02u9h7jvgt6yhgust4f0xhl590ji6vbh28w92q4fzjzvr2kje8kk156cg8zacyt8erv64j9t0cga0ls9ocvn3unmeb2h9phlhqojs9s1dzqxvq2uu5k1ghfiuzvmceh86e4wsh6p5c21lmmryjotl9qpe52bnwf9bkqwtg6s9cnk8a3q7jw65zjnv50j70gthy7320og3864dece1fadjuktncu0b4bkwpoib0nd0p5vzaqi5te5m387squ9t05yrdb0ahhg61s3c6wgcb9rfcvkoypcy95ejpamqzqabexjc1mnxnkkr0mpvb2txygaua6ajilhhvh4lb7w6rzoctfslpvgah3cl4fbg7asd57gq == 
\x\h\w\5\t\9\p\5\d\f\3\u\v\e\1\8\6\0\n\i\0\j\l\x\q\u\a\4\t\e\4\s\t\x\q\i\7\6\w\j\b\z\f\4\y\q\m\c\3\5\g\m\r\n\y\s\8\8\c\m\b\8\2\6\k\p\1\c\0\m\r\b\l\o\e\a\p\3\w\h\y\d\m\z\h\q\1\o\u\p\3\w\o\2\v\2\0\m\8\9\n\7\1\o\7\b\x\0\s\k\c\d\m\k\t\6\c\8\e\p\9\u\0\u\7\o\a\l\o\3\u\m\w\b\y\1\m\2\q\5\3\2\7\w\x\i\4\f\l\a\b\5\1\7\q\e\4\y\4\o\7\1\7\2\c\b\g\3\c\j\d\x\h\3\c\v\f\3\4\0\9\k\v\v\s\8\q\o\z\v\l\2\3\d\o\2\f\c\j\i\0\z\7\w\h\t\a\q\c\f\e\s\a\d\j\3\k\a\l\h\6\k\g\m\s\k\g\0\4\4\q\s\h\u\g\g\1\s\b\p\g\a\x\9\l\i\3\0\f\j\0\8\r\y\o\z\q\o\r\9\4\7\x\r\d\i\i\e\c\5\7\1\2\7\2\h\w\r\r\m\1\y\n\j\l\c\8\8\y\j\7\0\m\o\l\g\x\o\j\o\i\v\z\6\k\3\9\n\n\s\5\e\c\p\2\8\i\8\a\9\k\a\8\u\4\3\e\r\9\x\8\2\d\4\m\3\k\a\a\l\y\3\7\8\x\t\b\l\o\o\6\p\t\a\9\p\d\w\b\d\a\v\2\5\6\0\3\l\2\s\t\i\4\0\z\3\5\3\f\p\t\1\n\6\9\y\t\w\a\q\h\m\g\q\i\i\9\x\a\s\j\w\o\a\5\a\2\4\i\4\r\k\4\p\a\r\s\9\a\9\q\r\t\3\d\k\a\1\v\z\o\8\9\c\m\k\p\m\f\j\9\8\x\v\8\2\g\x\e\h\e\1\i\8\8\5\r\s\7\4\l\7\s\h\4\t\6\e\j\a\w\n\r\2\z\b\s\9\7\5\y\y\e\w\1\2\l\y\n\m\b\4\5\7\q\t\0\y\6\k\a\u\v\e\h\l\s\c\k\t\s\s\q\x\b\o\x\p\5\j\4\p\j\4\k\w\0\n\0\o\q\k\k\5\a\r\w\y\1\f\f\4\d\3\o\y\n\t\1\b\u\2\h\z\7\j\z\s\b\8\p\s\o\l\4\s\v\t\l\i\x\m\i\9\7\t\e\t\k\t\j\x\w\p\8\a\t\e\v\z\v\l\n\e\c\w\t\3\j\n\o\c\s\x\u\3\v\8\k\l\t\8\8\p\5\y\s\i\s\n\j\w\n\c\9\k\1\v\q\2\5\s\l\e\3\w\r\n\0\x\c\c\s\k\z\y\z\4\9\e\0\v\j\9\i\h\y\w\e\0\2\u\9\h\7\j\v\g\t\6\y\h\g\u\s\t\4\f\0\x\h\l\5\9\0\j\i\6\v\b\h\2\8\w\9\2\q\4\f\z\j\z\v\r\2\k\j\e\8\k\k\1\5\6\c\g\8\z\a\c\y\t\8\e\r\v\6\4\j\9\t\0\c\g\a\0\l\s\9\o\c\v\n\3\u\n\m\e\b\2\h\9\p\h\l\h\q\o\j\s\9\s\1\d\z\q\x\v\q\2\u\u\5\k\1\g\h\f\i\u\z\v\m\c\e\h\8\6\e\4\w\s\h\6\p\5\c\2\1\l\m\m\r\y\j\o\t\l\9\q\p\e\5\2\b\n\w\f\9\b\k\q\w\t\g\6\s\9\c\n\k\8\a\3\q\7\j\w\6\5\z\j\n\v\5\0\j\7\0\g\t\h\y\7\3\2\0\o\g\3\8\6\4\d\e\c\e\1\f\a\d\j\u\k\t\n\c\u\0\b\4\b\k\w\p\o\i\b\0\n\d\0\p\5\v\z\a\q\i\5\t\e\5\m\3\8\7\s\q\u\9\t\0\5\y\r\d\b\0\a\h\h\g\6\1\s\3\c\6\w\g\c\b\9\r\f\c\v\k\o\y\p\c\y\9\5\e\j\p\a\m\q\z\q\a\b\e\x\j\c\1\m\n\x\n\k\k\r\0\m\p\v\b\2\t\x\y\g\a\u\a\6\a\j\i\l\h\h\v\h\4\l\b\7\w\6\r\z\o\c\t\f\s\l\p\v\g\a\h\3\c\l\4\f\b\g\7\a\s\d\5\7\g\q ]] 00:07:36.745 19:46:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:37.312 19:46:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:37.312 19:46:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:37.312 19:46:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:37.312 19:46:05 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:37.312 [2024-07-24 19:46:05.804389] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
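The uring.sh@60 to @71 steps above close the round trip: the data is read back out of uring0 into magic.dump1, the first 1 KiB of each dump is compared against the original magic string, and the two dumps are diffed byte for byte. A compact sketch of that verify pattern; $conf stands for the same malloc0/uring0 JSON config shown earlier, and feeding the two read builtins from magic.dump0 and magic.dump1 is an assumption, since xtrace does not show their redirections:

DD_DIR=/home/vagrant/spdk_repo/spdk/test/dd
# read the bdev contents back into a second dump file
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of="$DD_DIR/magic.dump1" --json "$conf"
# the 1 KiB magic string must survive in both dumps...
read -rn1024 verify_magic < "$DD_DIR/magic.dump0"
[[ $verify_magic == "$magic" ]]
read -rn1024 verify_magic < "$DD_DIR/magic.dump1"
[[ $verify_magic == "$magic" ]]
# ...and the two dumps must be byte-identical
diff -q "$DD_DIR/magic.dump0" "$DD_DIR/magic.dump1"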
00:07:37.312 [2024-07-24 19:46:05.804473] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63186 ] 00:07:37.312 { 00:07:37.312 "subsystems": [ 00:07:37.312 { 00:07:37.312 "subsystem": "bdev", 00:07:37.312 "config": [ 00:07:37.312 { 00:07:37.312 "params": { 00:07:37.312 "block_size": 512, 00:07:37.312 "num_blocks": 1048576, 00:07:37.312 "name": "malloc0" 00:07:37.312 }, 00:07:37.312 "method": "bdev_malloc_create" 00:07:37.312 }, 00:07:37.312 { 00:07:37.312 "params": { 00:07:37.312 "filename": "/dev/zram1", 00:07:37.312 "name": "uring0" 00:07:37.312 }, 00:07:37.312 "method": "bdev_uring_create" 00:07:37.312 }, 00:07:37.312 { 00:07:37.312 "method": "bdev_wait_for_examine" 00:07:37.312 } 00:07:37.312 ] 00:07:37.312 } 00:07:37.312 ] 00:07:37.312 } 00:07:37.312 [2024-07-24 19:46:05.943998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.571 [2024-07-24 19:46:06.040605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.571 [2024-07-24 19:46:06.103888] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:41.659  Copying: 146/512 [MB] (146 MBps) Copying: 295/512 [MB] (149 MBps) Copying: 441/512 [MB] (145 MBps) Copying: 512/512 [MB] (average 147 MBps) 00:07:41.659 00:07:41.659 19:46:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:41.659 19:46:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:41.659 19:46:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:41.659 19:46:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:41.659 19:46:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:41.659 19:46:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:41.659 19:46:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:41.659 19:46:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:41.659 [2024-07-24 19:46:10.300971] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
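From uring.sh@82 onward the test switches to the failure path: the same bdev config gains a bdev_uring_delete step (visible in the JSON dump that follows), so uring0 is created and then torn down during initialization, and the @94 run traced below is wrapped in NOT and expected to fail with "No such device". A rough sketch of that shape, leaving out the test's /dev/fd plumbing; delete.json is a hypothetical file holding the malloc0/uring0 config shown earlier plus one extra entry, /dev/null stands in for the empty descriptors the test wires up, and NOT is the autotest_common.sh helper that succeeds only when its command exits non-zero:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
# delete.json appends to the earlier config:
#   { "params": { "name": "uring0" }, "method": "bdev_uring_delete" }
# first pass: the delete runs at init time, so nothing is left to copy
"$SPDK_DD" --if=/dev/null --of=/dev/null --json delete.json
# second pass: reading from the deleted bdev must fail
NOT "$SPDK_DD" --ib=uring0 --of=/dev/null --json delete.json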
00:07:41.659 [2024-07-24 19:46:10.301062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63253 ] 00:07:41.659 { 00:07:41.659 "subsystems": [ 00:07:41.659 { 00:07:41.659 "subsystem": "bdev", 00:07:41.659 "config": [ 00:07:41.659 { 00:07:41.659 "params": { 00:07:41.659 "block_size": 512, 00:07:41.659 "num_blocks": 1048576, 00:07:41.659 "name": "malloc0" 00:07:41.659 }, 00:07:41.659 "method": "bdev_malloc_create" 00:07:41.659 }, 00:07:41.659 { 00:07:41.659 "params": { 00:07:41.659 "filename": "/dev/zram1", 00:07:41.659 "name": "uring0" 00:07:41.659 }, 00:07:41.659 "method": "bdev_uring_create" 00:07:41.659 }, 00:07:41.659 { 00:07:41.659 "params": { 00:07:41.659 "name": "uring0" 00:07:41.659 }, 00:07:41.659 "method": "bdev_uring_delete" 00:07:41.659 }, 00:07:41.659 { 00:07:41.659 "method": "bdev_wait_for_examine" 00:07:41.659 } 00:07:41.659 ] 00:07:41.659 } 00:07:41.659 ] 00:07:41.659 } 00:07:41.918 [2024-07-24 19:46:10.439421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.918 [2024-07-24 19:46:10.553561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.177 [2024-07-24 19:46:10.610761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:42.744  Copying: 0/0 [B] (average 0 Bps) 00:07:42.744 00:07:42.744 19:46:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:42.744 19:46:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:42.744 19:46:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:42.744 19:46:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:07:42.744 19:46:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:42.744 19:46:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:42.744 19:46:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:42.744 19:46:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.744 19:46:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.744 19:46:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.744 19:46:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.744 19:46:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.744 19:46:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.744 19:46:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.744 19:46:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:42.744 19:46:11 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:42.744 [2024-07-24 19:46:11.339727] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:42.745 [2024-07-24 19:46:11.339837] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63284 ] 00:07:42.745 { 00:07:42.745 "subsystems": [ 00:07:42.745 { 00:07:42.745 "subsystem": "bdev", 00:07:42.745 "config": [ 00:07:42.745 { 00:07:42.745 "params": { 00:07:42.745 "block_size": 512, 00:07:42.745 "num_blocks": 1048576, 00:07:42.745 "name": "malloc0" 00:07:42.745 }, 00:07:42.745 "method": "bdev_malloc_create" 00:07:42.745 }, 00:07:42.745 { 00:07:42.745 "params": { 00:07:42.745 "filename": "/dev/zram1", 00:07:42.745 "name": "uring0" 00:07:42.745 }, 00:07:42.745 "method": "bdev_uring_create" 00:07:42.745 }, 00:07:42.745 { 00:07:42.745 "params": { 00:07:42.745 "name": "uring0" 00:07:42.745 }, 00:07:42.745 "method": "bdev_uring_delete" 00:07:42.745 }, 00:07:42.745 { 00:07:42.745 "method": "bdev_wait_for_examine" 00:07:42.745 } 00:07:42.745 ] 00:07:42.745 } 00:07:42.745 ] 00:07:42.745 } 00:07:43.004 [2024-07-24 19:46:11.479020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.004 [2024-07-24 19:46:11.576247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.004 [2024-07-24 19:46:11.630084] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:43.263 [2024-07-24 19:46:11.829221] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:43.263 [2024-07-24 19:46:11.829277] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:43.263 [2024-07-24 19:46:11.829288] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:43.263 [2024-07-24 19:46:11.829297] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:43.522 [2024-07-24 19:46:12.156598] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:43.780 19:46:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:07:43.780 19:46:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:43.780 19:46:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:07:43.780 19:46:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:07:43.780 19:46:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:07:43.780 19:46:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:43.780 19:46:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:43.780 19:46:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:07:43.780 19:46:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:07:43.780 19:46:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:07:43.780 19:46:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:07:43.780 19:46:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:44.039 00:07:44.039 real 0m16.156s 00:07:44.039 user 0m10.969s 00:07:44.039 sys 0m13.219s 00:07:44.039 19:46:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.039 19:46:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:44.039 ************************************ 00:07:44.039 END TEST dd_uring_copy 00:07:44.039 ************************************ 00:07:44.039 ************************************ 00:07:44.039 END TEST spdk_dd_uring 00:07:44.039 ************************************ 00:07:44.039 00:07:44.039 real 0m16.298s 00:07:44.039 user 0m11.020s 00:07:44.039 sys 0m13.309s 00:07:44.039 19:46:12 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.039 19:46:12 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:44.039 19:46:12 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:44.039 19:46:12 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:44.039 19:46:12 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.039 19:46:12 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:44.039 ************************************ 00:07:44.039 START TEST spdk_dd_sparse 00:07:44.039 ************************************ 00:07:44.039 19:46:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:44.039 * Looking for test storage... 00:07:44.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:44.039 19:46:12 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:44.039 19:46:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.039 19:46:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.039 19:46:12 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.039 19:46:12 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.039 19:46:12 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.039 19:46:12 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.039 19:46:12 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:44.040 19:46:12 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.040 19:46:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:44.040 19:46:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:44.040 19:46:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:44.040 19:46:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:44.040 19:46:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:44.040 19:46:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:44.040 19:46:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:44.040 19:46:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:44.040 19:46:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:44.040 19:46:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:44.040 19:46:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:44.040 1+0 records in 00:07:44.040 1+0 records out 00:07:44.040 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00664647 s, 631 MB/s 00:07:44.040 19:46:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:44.299 1+0 records in 00:07:44.299 1+0 records out 00:07:44.299 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00707808 s, 593 MB/s 00:07:44.299 19:46:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:44.299 1+0 records in 00:07:44.299 1+0 records out 00:07:44.299 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00763255 s, 550 MB/s 00:07:44.299 19:46:12 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:44.299 19:46:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:44.299 19:46:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.299 19:46:12 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:44.299 ************************************ 00:07:44.299 START TEST dd_sparse_file_to_file 00:07:44.299 ************************************ 00:07:44.299 19:46:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # 
file_to_file 00:07:44.299 19:46:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:44.299 19:46:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:44.299 19:46:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:44.299 19:46:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:44.299 19:46:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:44.299 19:46:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:44.299 19:46:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:44.299 19:46:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:44.299 19:46:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:44.299 19:46:12 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:44.299 [2024-07-24 19:46:12.792837] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:44.299 [2024-07-24 19:46:12.792943] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63368 ] 00:07:44.299 { 00:07:44.299 "subsystems": [ 00:07:44.299 { 00:07:44.299 "subsystem": "bdev", 00:07:44.299 "config": [ 00:07:44.299 { 00:07:44.299 "params": { 00:07:44.299 "block_size": 4096, 00:07:44.299 "filename": "dd_sparse_aio_disk", 00:07:44.299 "name": "dd_aio" 00:07:44.299 }, 00:07:44.299 "method": "bdev_aio_create" 00:07:44.299 }, 00:07:44.299 { 00:07:44.299 "params": { 00:07:44.299 "lvs_name": "dd_lvstore", 00:07:44.299 "bdev_name": "dd_aio" 00:07:44.299 }, 00:07:44.299 "method": "bdev_lvol_create_lvstore" 00:07:44.299 }, 00:07:44.299 { 00:07:44.299 "method": "bdev_wait_for_examine" 00:07:44.299 } 00:07:44.299 ] 00:07:44.299 } 00:07:44.299 ] 00:07:44.299 } 00:07:44.299 [2024-07-24 19:46:12.934032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.558 [2024-07-24 19:46:13.068820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.558 [2024-07-24 19:46:13.128483] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:45.091  Copying: 12/36 [MB] (average 1000 MBps) 00:07:45.091 00:07:45.091 19:46:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:45.091 19:46:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:45.091 19:46:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:45.091 19:46:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:45.091 19:46:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:45.091 19:46:13 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:45.091 19:46:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:45.091 19:46:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:45.091 19:46:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:45.091 19:46:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:45.091 00:07:45.091 real 0m0.789s 00:07:45.091 user 0m0.494s 00:07:45.091 sys 0m0.392s 00:07:45.091 19:46:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.091 19:46:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:45.091 ************************************ 00:07:45.091 END TEST dd_sparse_file_to_file 00:07:45.091 ************************************ 00:07:45.091 19:46:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:45.091 19:46:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:45.091 19:46:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.091 19:46:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:45.091 ************************************ 00:07:45.091 START TEST dd_sparse_file_to_bdev 00:07:45.091 ************************************ 00:07:45.091 19:46:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:07:45.091 19:46:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:45.091 19:46:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:45.091 19:46:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:45.091 19:46:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:45.091 19:46:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:45.091 19:46:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:45.091 19:46:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:45.091 19:46:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:45.091 [2024-07-24 19:46:13.630415] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
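The dd_sparse_file_to_bdev test starting above copies file_zero2 into a thin-provisioned logical volume. file_zero1 was built from three 4 MiB writes at offsets 0, 16 MiB and 32 MiB, so it (and its sparse copy file_zero2) has an apparent size of 36 MiB (37748736 bytes) with only 12 MiB allocated (24576 blocks of 512 bytes); that is consistent with the 12/36 MB progress lines, since with --sparse spdk_dd skips the holes. A sketch of the @73 invocation with its config written to a temporary file; it assumes the working directory is the dd test directory where prepare() created the files, and that dd_lvstore already exists on dd_sparse_aio_disk from the preceding file_to_file step:

conf=$(mktemp)
# AIO bdev over the backing file, plus a 36 MiB thin-provisioned lvol in the
# dd_lvstore expected to be found on that disk during examine
cat > "$conf" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 4096, "filename": "dd_sparse_aio_disk", "name": "dd_aio" },
          "method": "bdev_aio_create" },
        { "params": { "lvs_name": "dd_lvstore", "lvol_name": "dd_lvol",
                      "size_in_mib": 36, "thin_provision": true },
          "method": "bdev_lvol_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# hole-skipping copy of the sparse file into the lvol, using 12 MiB I/O units
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json "$conf"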
00:07:45.091 [2024-07-24 19:46:13.630528] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63416 ] 00:07:45.091 { 00:07:45.091 "subsystems": [ 00:07:45.091 { 00:07:45.091 "subsystem": "bdev", 00:07:45.091 "config": [ 00:07:45.091 { 00:07:45.091 "params": { 00:07:45.091 "block_size": 4096, 00:07:45.091 "filename": "dd_sparse_aio_disk", 00:07:45.091 "name": "dd_aio" 00:07:45.091 }, 00:07:45.091 "method": "bdev_aio_create" 00:07:45.091 }, 00:07:45.091 { 00:07:45.091 "params": { 00:07:45.091 "lvs_name": "dd_lvstore", 00:07:45.091 "lvol_name": "dd_lvol", 00:07:45.091 "size_in_mib": 36, 00:07:45.091 "thin_provision": true 00:07:45.091 }, 00:07:45.091 "method": "bdev_lvol_create" 00:07:45.091 }, 00:07:45.091 { 00:07:45.091 "method": "bdev_wait_for_examine" 00:07:45.091 } 00:07:45.091 ] 00:07:45.091 } 00:07:45.091 ] 00:07:45.091 } 00:07:45.350 [2024-07-24 19:46:13.776731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.350 [2024-07-24 19:46:13.878195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.350 [2024-07-24 19:46:13.937084] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:45.867  Copying: 12/36 [MB] (average 500 MBps) 00:07:45.867 00:07:45.867 00:07:45.867 real 0m0.708s 00:07:45.867 user 0m0.453s 00:07:45.867 sys 0m0.369s 00:07:45.868 19:46:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.868 19:46:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:45.868 ************************************ 00:07:45.868 END TEST dd_sparse_file_to_bdev 00:07:45.868 ************************************ 00:07:45.868 19:46:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:45.868 19:46:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:45.868 19:46:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.868 19:46:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:45.868 ************************************ 00:07:45.868 START TEST dd_sparse_bdev_to_file 00:07:45.868 ************************************ 00:07:45.868 19:46:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:07:45.868 19:46:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:45.868 19:46:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:45.868 19:46:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:45.868 19:46:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:45.868 19:46:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:45.868 19:46:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:45.868 19:46:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # 
xtrace_disable 00:07:45.868 19:46:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:45.868 { 00:07:45.868 "subsystems": [ 00:07:45.868 { 00:07:45.868 "subsystem": "bdev", 00:07:45.868 "config": [ 00:07:45.868 { 00:07:45.868 "params": { 00:07:45.868 "block_size": 4096, 00:07:45.868 "filename": "dd_sparse_aio_disk", 00:07:45.868 "name": "dd_aio" 00:07:45.868 }, 00:07:45.868 "method": "bdev_aio_create" 00:07:45.868 }, 00:07:45.868 { 00:07:45.868 "method": "bdev_wait_for_examine" 00:07:45.868 } 00:07:45.868 ] 00:07:45.868 } 00:07:45.868 ] 00:07:45.868 } 00:07:45.868 [2024-07-24 19:46:14.383951] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:45.868 [2024-07-24 19:46:14.384062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63453 ] 00:07:45.868 [2024-07-24 19:46:14.521239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.127 [2024-07-24 19:46:14.632094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.127 [2024-07-24 19:46:14.690412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:46.386  Copying: 12/36 [MB] (average 923 MBps) 00:07:46.386 00:07:46.386 19:46:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:46.386 19:46:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:46.386 19:46:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:46.386 19:46:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:46.386 19:46:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:46.386 19:46:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:46.386 19:46:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:46.386 19:46:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:46.386 19:46:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:46.386 19:46:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:46.386 00:07:46.386 real 0m0.708s 00:07:46.386 user 0m0.442s 00:07:46.386 sys 0m0.354s 00:07:46.386 19:46:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.386 ************************************ 00:07:46.386 END TEST dd_sparse_bdev_to_file 00:07:46.386 19:46:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:46.386 ************************************ 00:07:46.646 19:46:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:46.646 19:46:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:46.646 19:46:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:46.646 19:46:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:46.646 19:46:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:46.646 00:07:46.646 real 0m2.502s 00:07:46.646 user 
0m1.484s 00:07:46.646 sys 0m1.309s 00:07:46.646 19:46:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.646 19:46:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:46.646 ************************************ 00:07:46.646 END TEST spdk_dd_sparse 00:07:46.646 ************************************ 00:07:46.646 19:46:15 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:46.646 19:46:15 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.646 19:46:15 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.646 19:46:15 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:46.646 ************************************ 00:07:46.646 START TEST spdk_dd_negative 00:07:46.646 ************************************ 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:46.646 * Looking for test storage... 00:07:46.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:46.646 ************************************ 00:07:46.646 START TEST dd_invalid_arguments 00:07:46.646 ************************************ 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.646 19:46:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:46.905 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:46.905 00:07:46.905 CPU options: 00:07:46.905 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:46.905 (like [0,1,10]) 00:07:46.905 --lcores lcore to CPU mapping list. The list is in the format: 00:07:46.905 [<,lcores[@CPUs]>...] 00:07:46.905 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:46.905 Within the group, '-' is used for range separator, 00:07:46.905 ',' is used for single number separator. 00:07:46.905 '( )' can be omitted for single element group, 00:07:46.906 '@' can be omitted if cpus and lcores have the same value 00:07:46.906 --disable-cpumask-locks Disable CPU core lock files. 00:07:46.906 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:46.906 pollers in the app support interrupt mode) 00:07:46.906 -p, --main-core main (primary) core for DPDK 00:07:46.906 00:07:46.906 Configuration options: 00:07:46.906 -c, --config, --json JSON config file 00:07:46.906 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:46.906 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:07:46.906 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:46.906 --rpcs-allowed comma-separated list of permitted RPCS 00:07:46.906 --json-ignore-init-errors don't exit on invalid config entry 00:07:46.906 00:07:46.906 Memory options: 00:07:46.906 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:46.906 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:46.906 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:46.906 -R, --huge-unlink unlink huge files after initialization 00:07:46.906 -n, --mem-channels number of memory channels used for DPDK 00:07:46.906 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:46.906 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:46.906 --no-huge run without using hugepages 00:07:46.906 -i, --shm-id shared memory ID (optional) 00:07:46.906 -g, --single-file-segments force creating just one hugetlbfs file 00:07:46.906 00:07:46.906 PCI options: 00:07:46.906 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:46.906 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:46.906 -u, --no-pci disable PCI access 00:07:46.906 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:46.906 00:07:46.906 Log options: 00:07:46.906 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:46.906 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:46.906 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:46.906 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:46.906 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:07:46.906 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:07:46.906 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:07:46.906 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:07:46.906 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:07:46.906 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:07:46.906 virtio_vfio_user, vmd) 00:07:46.906 --silence-noticelog 
disable notice level logging to stderr 00:07:46.906 00:07:46.906 Trace options: 00:07:46.906 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:46.906 setting 0 to disable trace (default 32768) 00:07:46.906 Tracepoints vary in size and can use more than one trace entry. 00:07:46.906 -e, --tpoint-group [:] 00:07:46.906 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:46.906 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:46.906 [2024-07-24 19:46:15.321573] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:46.906 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:07:46.906 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:46.906 a tracepoint group. First tpoint inside a group can be enabled by 00:07:46.906 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:46.906 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:07:46.906 in /include/spdk_internal/trace_defs.h 00:07:46.906 00:07:46.906 Other options: 00:07:46.906 -h, --help show this usage 00:07:46.906 -v, --version print SPDK version 00:07:46.906 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:46.906 --env-context Opaque context for use of the env implementation 00:07:46.906 00:07:46.906 Application specific: 00:07:46.906 [--------- DD Options ---------] 00:07:46.906 --if Input file. Must specify either --if or --ib. 00:07:46.906 --ib Input bdev. Must specifier either --if or --ib 00:07:46.906 --of Output file. Must specify either --of or --ob. 00:07:46.906 --ob Output bdev. Must specify either --of or --ob. 00:07:46.906 --iflag Input file flags. 00:07:46.906 --oflag Output file flags. 00:07:46.906 --bs I/O unit size (default: 4096) 00:07:46.906 --qd Queue depth (default: 2) 00:07:46.906 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:46.906 --skip Skip this many I/O units at start of input. (default: 0) 00:07:46.906 --seek Skip this many I/O units at start of output. (default: 0) 00:07:46.906 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:07:46.906 --sparse Enable hole skipping in input target 00:07:46.906 Available iflag and oflag values: 00:07:46.906 append - append mode 00:07:46.906 direct - use direct I/O for data 00:07:46.906 directory - fail unless a directory 00:07:46.906 dsync - use synchronized I/O for data 00:07:46.906 noatime - do not update access time 00:07:46.906 noctty - do not assign controlling terminal from file 00:07:46.906 nofollow - do not follow symlinks 00:07:46.906 nonblock - use non-blocking I/O 00:07:46.906 sync - use synchronized I/O for data and metadata 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:46.906 00:07:46.906 real 0m0.076s 00:07:46.906 user 0m0.051s 00:07:46.906 sys 0m0.024s 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:46.906 ************************************ 00:07:46.906 END TEST dd_invalid_arguments 00:07:46.906 ************************************ 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:46.906 ************************************ 00:07:46.906 START TEST dd_double_input 00:07:46.906 ************************************ 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative.dd_double_input -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:46.906 [2024-07-24 19:46:15.446781] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:46.906 00:07:46.906 real 0m0.071s 00:07:46.906 user 0m0.048s 00:07:46.906 sys 0m0.022s 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:46.906 ************************************ 00:07:46.906 END TEST dd_double_input 00:07:46.906 ************************************ 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:46.906 ************************************ 00:07:46.906 START TEST dd_double_output 00:07:46.906 ************************************ 00:07:46.906 19:46:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:07:46.907 19:46:15 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:46.907 19:46:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:07:46.907 19:46:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:46.907 19:46:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.907 19:46:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.907 19:46:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.907 19:46:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.907 19:46:15 
spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.907 19:46:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.907 19:46:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.907 19:46:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.907 19:46:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:46.907 [2024-07-24 19:46:15.564352] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:47.165 19:46:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:47.166 00:07:47.166 real 0m0.072s 00:07:47.166 user 0m0.044s 00:07:47.166 sys 0m0.027s 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:47.166 ************************************ 00:07:47.166 END TEST dd_double_output 00:07:47.166 ************************************ 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:47.166 ************************************ 00:07:47.166 START TEST dd_no_input 00:07:47.166 ************************************ 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_input -- 
common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:47.166 [2024-07-24 19:46:15.686679] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:47.166 00:07:47.166 real 0m0.075s 00:07:47.166 user 0m0.045s 00:07:47.166 sys 0m0.029s 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:47.166 ************************************ 00:07:47.166 END TEST dd_no_input 00:07:47.166 ************************************ 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:47.166 ************************************ 00:07:47.166 START TEST dd_no_output 00:07:47.166 ************************************ 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.166 19:46:15 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:47.166 19:46:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:47.166 [2024-07-24 19:46:15.814511] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:47.425 19:46:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:47.426 00:07:47.426 real 0m0.077s 00:07:47.426 user 0m0.048s 00:07:47.426 sys 0m0.029s 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:47.426 ************************************ 00:07:47.426 END TEST dd_no_output 00:07:47.426 ************************************ 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:47.426 ************************************ 00:07:47.426 START TEST dd_wrong_blocksize 00:07:47.426 ************************************ 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.426 19:46:15 
spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:47.426 [2024-07-24 19:46:15.944044] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:47.426 00:07:47.426 real 0m0.084s 00:07:47.426 user 0m0.047s 00:07:47.426 sys 0m0.035s 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.426 19:46:15 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:47.426 ************************************ 00:07:47.426 END TEST dd_wrong_blocksize 00:07:47.426 ************************************ 00:07:47.426 19:46:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:47.426 19:46:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:47.426 19:46:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.426 19:46:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:47.426 ************************************ 00:07:47.426 START TEST dd_smaller_blocksize 00:07:47.426 ************************************ 00:07:47.426 19:46:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:07:47.426 19:46:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:47.426 19:46:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:07:47.426 19:46:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:47.426 19:46:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.426 19:46:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.426 19:46:16 
spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.426 19:46:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.426 19:46:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.426 19:46:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.426 19:46:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:47.426 19:46:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:47.426 19:46:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:47.426 [2024-07-24 19:46:16.077575] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:47.426 [2024-07-24 19:46:16.077702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63667 ] 00:07:47.685 [2024-07-24 19:46:16.214075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.685 [2024-07-24 19:46:16.331188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.943 [2024-07-24 19:46:16.388435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:48.201 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:48.201 [2024-07-24 19:46:16.705862] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:48.202 [2024-07-24 19:46:16.705983] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:48.202 [2024-07-24 19:46:16.830940] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:48.460 19:46:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:07:48.460 19:46:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:48.460 19:46:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:07:48.460 19:46:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:07:48.460 19:46:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:07:48.461 19:46:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:48.461 00:07:48.461 real 0m0.912s 00:07:48.461 user 0m0.419s 00:07:48.461 sys 0m0.385s 00:07:48.461 19:46:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.461 19:46:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:48.461 ************************************ 00:07:48.461 END TEST dd_smaller_blocksize 00:07:48.461 ************************************ 00:07:48.461 19:46:16 
spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:07:48.461 19:46:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:48.461 19:46:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.461 19:46:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:48.461 ************************************ 00:07:48.461 START TEST dd_invalid_count 00:07:48.461 ************************************ 00:07:48.461 19:46:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 00:07:48.461 19:46:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:48.461 19:46:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:07:48.461 19:46:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:48.461 19:46:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.461 19:46:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.461 19:46:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.461 19:46:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.461 19:46:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.461 19:46:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.461 19:46:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.461 19:46:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:48.461 19:46:16 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:48.461 [2024-07-24 19:46:17.043463] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:48.461 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:07:48.461 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:48.461 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:48.461 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:48.461 00:07:48.461 real 0m0.069s 00:07:48.461 user 0m0.043s 00:07:48.461 sys 0m0.024s 00:07:48.461 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.461 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_count 
-- common/autotest_common.sh@10 -- # set +x 00:07:48.461 ************************************ 00:07:48.461 END TEST dd_invalid_count 00:07:48.461 ************************************ 00:07:48.461 19:46:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:48.461 19:46:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:48.461 19:46:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.461 19:46:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:48.461 ************************************ 00:07:48.461 START TEST dd_invalid_oflag 00:07:48.461 ************************************ 00:07:48.461 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:07:48.461 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:48.461 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:07:48.461 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:48.461 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.461 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.461 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.461 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.461 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.461 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.461 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.461 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:48.461 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:48.720 [2024-07-24 19:46:17.169086] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:48.720 00:07:48.720 real 0m0.071s 00:07:48.720 user 0m0.049s 00:07:48.720 sys 0m0.021s 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 
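The es bookkeeping in the traces above (local es=0, the es > 128 comparison, the final (( !es == 0 ))) comes from the NOT helper in common/autotest_common.sh, which every dd_* case in this block uses to assert that spdk_dd rejects the bad arguments. A simplified sketch of that pattern, not the literal helper:

    NOT() {
        # run the wrapped command and capture its exit status instead of aborting
        local es=0
        "$@" || es=$?
        # the real helper also special-cases statuses above 128 (death by signal)
        # and an optional list of expected codes; both are omitted in this sketch
        # succeed only if the wrapped command failed, i.e. es is non-zero
        (( !es == 0 ))
    }

Under this wrapper a negative test passes exactly when spdk_dd exits non-zero, which is why each case above ends with the captured es value rather than with spdk_dd output alone.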
00:07:48.720 ************************************ 00:07:48.720 END TEST dd_invalid_oflag 00:07:48.720 ************************************ 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:48.720 ************************************ 00:07:48.720 START TEST dd_invalid_iflag 00:07:48.720 ************************************ 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:48.720 [2024-07-24 19:46:17.296946] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:48.720 00:07:48.720 real 0m0.066s 00:07:48.720 user 0m0.035s 00:07:48.720 sys 0m0.030s 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.720 19:46:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:48.720 ************************************ 
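Both checks in this stretch enforce the same pairing rule: --oflag is accepted only together with --of, and --iflag only together with --if, so attaching either flag to the bdev-style --ib/--ob arguments fails. For contrast, a file-to-file copy using two of the flag names from the help text printed earlier in this log; the exact flag combination is illustrative only:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock \
        --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync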
00:07:48.720 END TEST dd_invalid_iflag 00:07:48.721 ************************************ 00:07:48.721 19:46:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:07:48.721 19:46:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:48.721 19:46:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.721 19:46:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:48.721 ************************************ 00:07:48.721 START TEST dd_unknown_flag 00:07:48.721 ************************************ 00:07:48.721 19:46:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:07:48.721 19:46:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:48.721 19:46:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:07:48.721 19:46:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:48.721 19:46:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.721 19:46:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.721 19:46:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.721 19:46:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.721 19:46:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.721 19:46:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.721 19:46:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.721 19:46:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:48.721 19:46:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:48.979 [2024-07-24 19:46:17.433048] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
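The run_test call near the top of this block is the same wrapper that produced every START TEST / END TEST banner and the real/user/sys timing summaries in this log. A rough sketch of what it does, simplified from common/autotest_common.sh rather than copied from it:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # run the named test function and time it
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }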
00:07:48.979 [2024-07-24 19:46:17.433178] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63765 ] 00:07:48.979 [2024-07-24 19:46:17.571915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.238 [2024-07-24 19:46:17.684680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.238 [2024-07-24 19:46:17.739670] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:49.238 [2024-07-24 19:46:17.774296] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:49.238 [2024-07-24 19:46:17.774371] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:49.238 [2024-07-24 19:46:17.774428] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:49.238 [2024-07-24 19:46:17.774442] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:49.238 [2024-07-24 19:46:17.774685] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:49.238 [2024-07-24 19:46:17.774702] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:49.238 [2024-07-24 19:46:17.774763] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:49.238 [2024-07-24 19:46:17.774775] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:49.238 [2024-07-24 19:46:17.889344] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:49.497 ************************************ 00:07:49.497 END TEST dd_unknown_flag 00:07:49.497 ************************************ 00:07:49.497 19:46:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:07:49.497 19:46:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:49.497 19:46:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:07:49.497 19:46:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:07:49.497 19:46:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:07:49.497 19:46:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:49.497 00:07:49.497 real 0m0.624s 00:07:49.497 user 0m0.355s 00:07:49.497 sys 0m0.171s 00:07:49.497 19:46:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.497 19:46:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:49.497 19:46:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:07:49.497 19:46:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:49.497 19:46:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.497 19:46:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:49.497 ************************************ 00:07:49.497 START TEST dd_invalid_json 00:07:49.497 ************************************ 00:07:49.497 19:46:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:07:49.497 19:46:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:49.497 19:46:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:07:49.497 19:46:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:07:49.497 19:46:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:49.497 19:46:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.497 19:46:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.497 19:46:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.497 19:46:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.497 19:46:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.497 19:46:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.497 19:46:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.497 19:46:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:49.497 19:46:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:49.497 [2024-07-24 19:46:18.107039] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
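The call above hands spdk_dd an empty document on /dev/fd/62 (the bare : traced at negative_dd.sh line 95 suggests it comes from process substitution of the null command), which the parser rejects below with "JSON data cannot be empty". For the positive case, --json expects an SPDK JSON configuration; a minimal sketch in SPDK's usual subsystems layout, with the Malloc bdev name and sizes invented for illustration:

    # sketch: create a malloc bdev via a JSON config and copy a dump file into it
    cfg=$(mktemp)
    echo '{
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 131072, "block_size": 512 } }
          ]
        }
      ]
    }' > "$cfg"
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Malloc0 --json "$cfg"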
00:07:49.497 [2024-07-24 19:46:18.107149] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63793 ] 00:07:49.756 [2024-07-24 19:46:18.246372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.756 [2024-07-24 19:46:18.356706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.756 [2024-07-24 19:46:18.356828] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:49.756 [2024-07-24 19:46:18.356843] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:49.756 [2024-07-24 19:46:18.356853] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:49.756 [2024-07-24 19:46:18.356890] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:50.015 19:46:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:07:50.015 19:46:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:50.015 19:46:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:07:50.015 ************************************ 00:07:50.015 END TEST dd_invalid_json 00:07:50.015 ************************************ 00:07:50.015 19:46:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:07:50.015 19:46:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:07:50.015 19:46:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:50.015 00:07:50.015 real 0m0.421s 00:07:50.015 user 0m0.234s 00:07:50.015 sys 0m0.083s 00:07:50.015 19:46:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.015 19:46:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:50.015 ************************************ 00:07:50.015 END TEST spdk_dd_negative 00:07:50.015 ************************************ 00:07:50.015 00:07:50.015 real 0m3.354s 00:07:50.015 user 0m1.640s 00:07:50.015 sys 0m1.350s 00:07:50.015 19:46:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.015 19:46:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:50.015 ************************************ 00:07:50.015 END TEST spdk_dd 00:07:50.015 ************************************ 00:07:50.015 00:07:50.015 real 1m19.824s 00:07:50.015 user 0m52.121s 00:07:50.015 sys 0m34.553s 00:07:50.015 19:46:18 spdk_dd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.015 19:46:18 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:50.015 19:46:18 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:07:50.015 19:46:18 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:50.015 19:46:18 -- spdk/autotest.sh@264 -- # timing_exit lib 00:07:50.015 19:46:18 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:50.015 19:46:18 -- common/autotest_common.sh@10 -- # set +x 00:07:50.015 19:46:18 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:50.015 19:46:18 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:07:50.015 19:46:18 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:07:50.015 19:46:18 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:07:50.015 19:46:18 -- 
spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:07:50.015 19:46:18 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:07:50.015 19:46:18 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:50.015 19:46:18 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:50.015 19:46:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.015 19:46:18 -- common/autotest_common.sh@10 -- # set +x 00:07:50.015 ************************************ 00:07:50.015 START TEST nvmf_tcp 00:07:50.015 ************************************ 00:07:50.015 19:46:18 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:50.275 * Looking for test storage... 00:07:50.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:50.275 19:46:18 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:50.275 19:46:18 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:50.275 19:46:18 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:50.275 19:46:18 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:50.275 19:46:18 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.275 19:46:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:50.275 ************************************ 00:07:50.275 START TEST nvmf_target_core 00:07:50.275 ************************************ 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:50.275 * Looking for test storage... 00:07:50.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:50.275 ************************************ 00:07:50.275 START TEST nvmf_host_management 00:07:50.275 ************************************ 00:07:50.275 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:50.535 * Looking for test storage... 
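host_management.sh begins by sourcing the same nvmf/common.sh seen a few lines earlier, which is where NVMF_PORT=4420, the generated hostnqn/hostid pair, and the nqn.2016-06.io.spdk:testnqn subsystem name come from; the first target address is set to 10.0.0.2 further down. Those values are typically consumed by the initiator in an nvme connect call along these lines (a sketch, not a command captured in this run, and it assumes the target is already listening on that address and port):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de \
        --hostid=69cdc0e8-4c23-4318-834b-1d87efff05de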
00:07:50.535 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:50.535 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:50.535 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:50.535 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.535 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.535 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.535 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.535 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.535 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.535 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.535 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.535 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.535 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.535 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:07:50.535 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:07:50.535 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.535 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.535 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:50.535 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.535 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:50.535 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.535 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.535 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.535 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.535 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 
-- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:50.536 19:46:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:50.536 Cannot find device "nvmf_init_br" 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:50.536 Cannot find device "nvmf_tgt_br" 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:50.536 Cannot find device "nvmf_tgt_br2" 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:50.536 Cannot find device "nvmf_init_br" 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:50.536 Cannot find device "nvmf_tgt_br" 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:50.536 Cannot find device "nvmf_tgt_br2" 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:50.536 Cannot find device "nvmf_br" 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:50.536 Cannot find device "nvmf_init_if" 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:50.536 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:50.536 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 
-- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:50.536 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:50.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:07:50.796 00:07:50.796 --- 10.0.0.2 ping statistics --- 00:07:50.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.796 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:50.796 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:07:50.796 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:07:50.796 00:07:50.796 --- 10.0.0.3 ping statistics --- 00:07:50.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.796 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:50.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:50.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:07:50.796 00:07:50.796 --- 10.0.0.1 ping statistics --- 00:07:50.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.796 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=64084 00:07:50.796 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:50.797 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 64084 00:07:50.797 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 64084 ']' 00:07:50.797 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.797 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
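Note on the test network built above: nvmf_veth_init places the target-side interfaces inside the nvmf_tgt_ns_spdk namespace, keeps the initiator interface in the root namespace, bridges the veth peer ends through nvmf_br, opens TCP port 4420 in iptables, and then ping-checks both directions before the target is started. A condensed shell sketch of the same topology follows; the commands, interface names and 10.0.0.0/24 addresses are taken from the trace, the second target interface (nvmf_tgt_if2 / 10.0.0.3) is omitted for brevity, and everything must run as root:

# Target-side netns plus two veth pairs, names as used by nvmf/common.sh
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root ns
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end is moved into the ns
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addressing: initiator 10.0.0.1 in the root ns, target 10.0.0.2 inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# Bring the links up and bridge the peer ends together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Admit NVMe/TCP traffic on port 4420, allow bridge forwarding, then verify reachability
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2

With the namespace in place, the target is launched inside it (ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E, as seen in the trace) while the initiator-side tools keep running in the root namespace.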
00:07:50.797 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.797 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.797 19:46:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.055 [2024-07-24 19:46:19.496068] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:51.056 [2024-07-24 19:46:19.496199] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.056 [2024-07-24 19:46:19.641414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:51.314 [2024-07-24 19:46:19.777101] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.314 [2024-07-24 19:46:19.777411] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.314 [2024-07-24 19:46:19.777520] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.314 [2024-07-24 19:46:19.777625] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.314 [2024-07-24 19:46:19.777701] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:51.314 [2024-07-24 19:46:19.778129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.314 [2024-07-24 19:46:19.778305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.315 [2024-07-24 19:46:19.778530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:51.315 [2024-07-24 19:46:19.778534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.315 [2024-07-24 19:46:19.837905] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:51.935 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.935 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:51.935 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:51.935 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:51.935 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.193 [2024-07-24 19:46:20.616039] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.193 Malloc0 00:07:52.193 [2024-07-24 19:46:20.699106] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=64146 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 64146 /var/tmp/bdevperf.sock 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 64146 ']' 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:52.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
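The subsystem setup just above is driven by a batch of RPCs: host_management.sh@22-30 regenerates rpcs.txt and pipes it through rpc_cmd, so the individual calls do not appear in the trace, only their effects (a Malloc0 bdev sized by MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 and a TCP listener on 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode0). A hand-driven equivalent with scripts/rpc.py would look roughly like the sketch below; the exact RPC batch is an assumption, only the transport options, bdev geometry, NQNs and listener address are taken from the log:

# Helper aimed at the target's RPC socket printed by waitforlisten
rpc() { scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

rpc nvmf_create_transport -t tcp -o -u 8192          # shown verbatim at host_management.sh@18
rpc bdev_malloc_create -b Malloc0 64 512             # 64 MiB bdev, 512-byte blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

The nvmf_subsystem_remove_host / nvmf_subsystem_add_host calls issued later in this test toggle exactly that host entry to force the in-flight I/O failures checked below.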
00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:52.193 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:52.194 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:52.194 { 00:07:52.194 "params": { 00:07:52.194 "name": "Nvme$subsystem", 00:07:52.194 "trtype": "$TEST_TRANSPORT", 00:07:52.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:52.194 "adrfam": "ipv4", 00:07:52.194 "trsvcid": "$NVMF_PORT", 00:07:52.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:52.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:52.194 "hdgst": ${hdgst:-false}, 00:07:52.194 "ddgst": ${ddgst:-false} 00:07:52.194 }, 00:07:52.194 "method": "bdev_nvme_attach_controller" 00:07:52.194 } 00:07:52.194 EOF 00:07:52.194 )") 00:07:52.194 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:52.194 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:52.194 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:52.194 19:46:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:52.194 "params": { 00:07:52.194 "name": "Nvme0", 00:07:52.194 "trtype": "tcp", 00:07:52.194 "traddr": "10.0.0.2", 00:07:52.194 "adrfam": "ipv4", 00:07:52.194 "trsvcid": "4420", 00:07:52.194 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:52.194 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:52.194 "hdgst": false, 00:07:52.194 "ddgst": false 00:07:52.194 }, 00:07:52.194 "method": "bdev_nvme_attach_controller" 00:07:52.194 }' 00:07:52.194 [2024-07-24 19:46:20.804540] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:52.194 [2024-07-24 19:46:20.804656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64146 ] 00:07:52.451 [2024-07-24 19:46:20.947904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.451 [2024-07-24 19:46:21.079357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.709 [2024-07-24 19:46:21.146678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:52.709 Running I/O for 10 seconds... 
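bdevperf above receives its initiator configuration through --json /dev/fd/63, a process substitution carrying the JSON that gen_nvmf_target_json prints in the trace. To repeat the run outside the harness, the same configuration can be written to a file first; the surrounding "subsystems"/"bdev" envelope below is an assumption (the trace only shows the bdev_nvme_attach_controller fragment), the attach parameters and bdevperf flags are copied from the log, and /tmp/bdevperf_nvme.json is just an illustrative path:

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# 64-deep queue, 64 KiB I/Os, 'verify' workload for 10 seconds, with its own RPC socket
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 10

The waitforio helper that follows polls this application over that socket (rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1, filtered with jq -r '.bdevs[0].num_read_ops') until at least 100 reads have completed, which is the trigger for the host-removal step.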
00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.279 [2024-07-24 
19:46:21.914166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.279 [2024-07-24 19:46:21.914220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.914237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.279 [2024-07-24 19:46:21.914247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.914258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.279 [2024-07-24 19:46:21.914268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.914278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:53.279 [2024-07-24 19:46:21.914287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.914297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113dd50 is same with the state(5) to be set 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.279 19:46:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:53.279 [2024-07-24 19:46:21.936035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.279 [2024-07-24 19:46:21.936091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.936118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.279 [2024-07-24 19:46:21.936129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.936143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.279 [2024-07-24 19:46:21.936152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.936164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:07:53.279 [2024-07-24 19:46:21.936174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.936186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.279 [2024-07-24 19:46:21.936196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.936208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.279 [2024-07-24 19:46:21.936217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.936229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.279 [2024-07-24 19:46:21.936238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.936250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.279 [2024-07-24 19:46:21.936259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.936270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.279 [2024-07-24 19:46:21.936279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.936291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.279 [2024-07-24 19:46:21.936301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.936312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.279 [2024-07-24 19:46:21.936322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.936333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.279 [2024-07-24 19:46:21.936343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.936354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.279 [2024-07-24 19:46:21.936363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.936381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:53.279 [2024-07-24 19:46:21.936390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.936407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.279 [2024-07-24 19:46:21.936416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.936428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.279 [2024-07-24 19:46:21.936437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.936449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.279 [2024-07-24 19:46:21.936459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.936470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.279 [2024-07-24 19:46:21.936488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.936501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.279 [2024-07-24 19:46:21.936511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.936523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.279 [2024-07-24 19:46:21.936532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.936544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.279 [2024-07-24 19:46:21.936553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.936565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.279 [2024-07-24 19:46:21.936574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.936586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.279 [2024-07-24 19:46:21.936595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.936606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:53.279 [2024-07-24 19:46:21.936615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.936627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.279 [2024-07-24 19:46:21.936636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.936648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.279 [2024-07-24 19:46:21.936657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.279 [2024-07-24 19:46:21.936668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.279 [2024-07-24 19:46:21.936677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.936689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.936698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.936720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.936729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.936753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.936764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.936776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.936788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.936799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.936808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.936820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.936829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.936841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:53.280 [2024-07-24 19:46:21.936868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.936882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.936891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.936903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.936913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.936930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.936939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.936950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.936960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.936971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.936980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.936991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.937000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.937017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.937026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.937037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.937047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.937058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.937067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.937078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:53.280 [2024-07-24 19:46:21.937088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.937099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.937108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.937120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.937129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.937140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.937160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.937184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.937193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.937204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.937214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.937226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.937240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.937252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.937261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.937273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.937283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.937294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.937303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.937314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:53.280 [2024-07-24 19:46:21.937325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.937336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.937346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.937357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.937366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.937378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.937387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.937399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.937408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.937419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.937428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.937439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.937448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.937460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.937469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.937480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.937490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.937501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.280 [2024-07-24 19:46:21.937510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.937521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:53.280 [2024-07-24 19:46:21.937531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:53.280 [2024-07-24 19:46:21.937541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1145ec0 is same with the state(5) to be set 00:07:53.280 [2024-07-24 19:46:21.937617] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1145ec0 was disconnected and freed. reset controller. 00:07:53.280 [2024-07-24 19:46:21.937719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x113dd50 (9): Bad file descriptor 00:07:53.280 [2024-07-24 19:46:21.938844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:53.280 task offset: 122880 on job bdev=Nvme0n1 fails 00:07:53.280 00:07:53.280 Latency(us) 00:07:53.280 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.280 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:53.280 Job: Nvme0n1 ended in about 0.66 seconds with error 00:07:53.280 Verification LBA range: start 0x0 length 0x400 00:07:53.280 Nvme0n1 : 0.66 1445.69 90.36 96.38 0.00 40433.08 2085.24 39559.91 00:07:53.280 =================================================================================================================== 00:07:53.281 Total : 1445.69 90.36 96.38 0.00 40433.08 2085.24 39559.91 00:07:53.281 [2024-07-24 19:46:21.941195] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:53.540 [2024-07-24 19:46:21.951549] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:54.475 19:46:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 64146 00:07:54.475 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (64146) - No such process 00:07:54.475 19:46:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:54.475 19:46:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:54.475 19:46:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:54.475 19:46:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:54.475 19:46:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:54.475 19:46:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:54.475 19:46:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:54.475 19:46:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:54.475 { 00:07:54.475 "params": { 00:07:54.475 "name": "Nvme$subsystem", 00:07:54.475 "trtype": "$TEST_TRANSPORT", 00:07:54.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:54.475 "adrfam": "ipv4", 00:07:54.475 "trsvcid": "$NVMF_PORT", 00:07:54.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:54.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:54.475 "hdgst": ${hdgst:-false}, 00:07:54.475 "ddgst": ${ddgst:-false} 00:07:54.475 }, 
00:07:54.475 "method": "bdev_nvme_attach_controller" 00:07:54.475 } 00:07:54.475 EOF 00:07:54.475 )") 00:07:54.475 19:46:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:54.475 19:46:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:54.475 19:46:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:54.475 19:46:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:54.475 "params": { 00:07:54.475 "name": "Nvme0", 00:07:54.475 "trtype": "tcp", 00:07:54.475 "traddr": "10.0.0.2", 00:07:54.475 "adrfam": "ipv4", 00:07:54.475 "trsvcid": "4420", 00:07:54.475 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:54.475 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:54.475 "hdgst": false, 00:07:54.475 "ddgst": false 00:07:54.475 }, 00:07:54.475 "method": "bdev_nvme_attach_controller" 00:07:54.475 }' 00:07:54.475 [2024-07-24 19:46:22.983936] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:07:54.475 [2024-07-24 19:46:22.984022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64184 ] 00:07:54.475 [2024-07-24 19:46:23.117352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.734 [2024-07-24 19:46:23.235405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.734 [2024-07-24 19:46:23.298634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:54.993 Running I/O for 1 seconds... 00:07:55.929 00:07:55.929 Latency(us) 00:07:55.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.929 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:55.929 Verification LBA range: start 0x0 length 0x400 00:07:55.929 Nvme0n1 : 1.02 1501.53 93.85 0.00 0.00 41793.99 4498.15 38844.97 00:07:55.929 =================================================================================================================== 00:07:55.929 Total : 1501.53 93.85 0.00 0.00 41793.99 4498.15 38844.97 00:07:56.188 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:56.188 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:56.188 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:56.188 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:56.188 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:56.188 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:56.188 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:56.188 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:56.188 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:56.188 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i 
in {1..20} 00:07:56.188 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:56.188 rmmod nvme_tcp 00:07:56.188 rmmod nvme_fabrics 00:07:56.188 rmmod nvme_keyring 00:07:56.188 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:56.188 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:56.188 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:56.188 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 64084 ']' 00:07:56.188 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 64084 00:07:56.188 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 64084 ']' 00:07:56.188 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 64084 00:07:56.188 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:56.188 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:56.188 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64084 00:07:56.188 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:56.188 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:56.188 killing process with pid 64084 00:07:56.188 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64084' 00:07:56.188 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 64084 00:07:56.188 19:46:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 64084 00:07:56.447 [2024-07-24 19:46:25.029779] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:56.447 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:56.447 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:56.447 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:56.447 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:56.447 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:56.447 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.447 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.447 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.447 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:56.447 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:56.447 00:07:56.447 real 0m6.237s 00:07:56.447 user 0m23.845s 
00:07:56.447 sys 0m1.661s 00:07:56.447 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.447 ************************************ 00:07:56.447 END TEST nvmf_host_management 00:07:56.447 ************************************ 00:07:56.447 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:56.707 ************************************ 00:07:56.707 START TEST nvmf_lvol 00:07:56.707 ************************************ 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:56.707 * Looking for test storage... 00:07:56.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:56.707 19:46:25 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.707 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:56.708 Cannot find device "nvmf_tgt_br" 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:56.708 Cannot find device "nvmf_tgt_br2" 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:56.708 Cannot find device "nvmf_tgt_br" 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:56.708 Cannot find device "nvmf_tgt_br2" 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:07:56.708 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:56.967 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:56.967 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set 
nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:56.967 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:56.968 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:56.968 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:56.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:56.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:07:56.968 00:07:56.968 --- 10.0.0.2 ping statistics --- 00:07:56.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.968 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:07:56.968 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:56.968 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:56.968 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:07:56.968 00:07:56.968 --- 10.0.0.3 ping statistics --- 00:07:56.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.968 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:07:56.968 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:56.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:56.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:07:56.968 00:07:56.968 --- 10.0.0.1 ping statistics --- 00:07:56.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.968 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:07:56.968 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.968 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:07:56.968 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:56.968 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.968 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:56.968 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:56.968 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.968 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:56.968 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:57.226 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:57.226 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:57.226 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:57.226 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:57.226 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=64402 00:07:57.226 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:57.226 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 64402 00:07:57.227 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 64402 ']' 00:07:57.227 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.227 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.227 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.227 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.227 19:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:57.227 [2024-07-24 19:46:25.699830] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:07:57.227 [2024-07-24 19:46:25.699934] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.227 [2024-07-24 19:46:25.838533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:57.485 [2024-07-24 19:46:25.972502] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:57.485 [2024-07-24 19:46:25.972591] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:57.485 [2024-07-24 19:46:25.972605] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:57.485 [2024-07-24 19:46:25.972616] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:57.485 [2024-07-24 19:46:25.972626] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:57.485 [2024-07-24 19:46:25.972795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.485 [2024-07-24 19:46:25.973495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.485 [2024-07-24 19:46:25.973512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.485 [2024-07-24 19:46:26.033930] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:58.051 19:46:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.051 19:46:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:58.051 19:46:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:58.051 19:46:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:58.051 19:46:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:58.051 19:46:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:58.051 19:46:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:58.308 [2024-07-24 19:46:26.971788] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:58.606 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:58.864 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:58.864 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:59.121 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:59.121 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:59.379 19:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:59.637 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=431e26a1-050e-422c-a4b8-7830640d947d 00:07:59.637 19:46:28 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 431e26a1-050e-422c-a4b8-7830640d947d lvol 20 00:07:59.895 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=211c4a0d-f4b8-4656-bb47-607a8cd2b352 00:07:59.895 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:00.152 19:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 211c4a0d-f4b8-4656-bb47-607a8cd2b352 00:08:00.410 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:00.668 [2024-07-24 19:46:29.306950] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.668 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:00.927 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=64483 00:08:00.927 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:00.927 19:46:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:02.304 19:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 211c4a0d-f4b8-4656-bb47-607a8cd2b352 MY_SNAPSHOT 00:08:02.304 19:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5c79fb5f-dcb2-4986-a82f-7ed97393678f 00:08:02.304 19:46:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 211c4a0d-f4b8-4656-bb47-607a8cd2b352 30 00:08:02.871 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 5c79fb5f-dcb2-4986-a82f-7ed97393678f MY_CLONE 00:08:03.130 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=61f820ce-e890-44f4-a8dd-d7afc918d8fb 00:08:03.130 19:46:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 61f820ce-e890-44f4-a8dd-d7afc918d8fb 00:08:03.389 19:46:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 64483 00:08:11.505 Initializing NVMe Controllers 00:08:11.505 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:11.505 Controller IO queue size 128, less than required. 00:08:11.505 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:11.505 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:11.505 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:11.505 Initialization complete. Launching workers. 
00:08:11.505 ======================================================== 00:08:11.505 Latency(us) 00:08:11.505 Device Information : IOPS MiB/s Average min max 00:08:11.505 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10033.90 39.19 12765.91 1902.88 53116.52 00:08:11.505 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10053.80 39.27 12732.72 201.89 96570.61 00:08:11.505 ======================================================== 00:08:11.505 Total : 20087.70 78.47 12749.30 201.89 96570.61 00:08:11.505 00:08:11.505 19:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:11.763 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 211c4a0d-f4b8-4656-bb47-607a8cd2b352 00:08:12.022 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 431e26a1-050e-422c-a4b8-7830640d947d 00:08:12.280 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:12.280 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:12.280 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:12.280 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:12.280 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:12.280 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:12.280 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:12.280 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:12.280 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:12.280 rmmod nvme_tcp 00:08:12.280 rmmod nvme_fabrics 00:08:12.280 rmmod nvme_keyring 00:08:12.280 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:12.280 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:12.280 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:12.280 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 64402 ']' 00:08:12.280 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 64402 00:08:12.280 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 64402 ']' 00:08:12.280 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 64402 00:08:12.280 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:12.280 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:12.280 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64402 00:08:12.280 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:12.280 killing process with pid 64402 00:08:12.280 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:12.280 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 64402' 00:08:12.280 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 64402 00:08:12.280 19:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 64402 00:08:12.538 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:12.538 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:12.538 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:12.538 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:12.538 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:12.538 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.538 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.538 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.538 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:12.538 00:08:12.538 real 0m16.035s 00:08:12.538 user 1m6.456s 00:08:12.538 sys 0m4.348s 00:08:12.539 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.539 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:12.539 ************************************ 00:08:12.539 END TEST nvmf_lvol 00:08:12.539 ************************************ 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:12.798 ************************************ 00:08:12.798 START TEST nvmf_lvs_grow 00:08:12.798 ************************************ 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:12.798 * Looking for test storage... 
00:08:12.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
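The nvmftestinit trace that follows tears down any leftover interfaces and rebuilds the virtual test network: a network namespace (nvmf_tgt_ns_spdk) holding the target-side ends of three veth pairs, a bridge (nvmf_br) joining the host-side ends, addresses 10.0.0.1 for the initiator and 10.0.0.2/10.0.0.3 for the target listeners, and ping checks at the end. Below is a condensed sketch of that setup, using only the interface names, addresses, and commands visible in this trace; the real nvmf_veth_init in nvmf/common.sh adds the cleanup and option handling that the trace also shows, which is omitted here.

#!/usr/bin/env bash
# Sketch of the veth/bridge topology the nvmf TCP tests build (names taken from the trace).
set -e

ip netns add nvmf_tgt_ns_spdk                                  # target runs inside its own netns

# Three veth pairs: one peer stays in the default netns, the target-side peers move into the namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addresses: 10.0.0.1 = initiator side, 10.0.0.2 and 10.0.0.3 = target listen addresses.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up, including loopback inside the namespace.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# A bridge ties the host-side veth ends together so initiator and target can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic (port 4420) in and forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks mirroring the trace: both target addresses and the initiator address must answer.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1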
00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:12.798 Cannot find device "nvmf_tgt_br" 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:12.798 Cannot find device "nvmf_tgt_br2" 00:08:12.798 19:46:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:12.798 Cannot find device "nvmf_tgt_br" 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:12.798 Cannot find device "nvmf_tgt_br2" 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:12.798 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:13.057 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:13.057 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:13.057 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:13.057 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:13.057 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:13.057 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:13.057 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:13.057 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:13.057 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:13.057 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:13.057 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:13.057 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:13.057 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:13.057 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:13.058 19:46:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:13.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:13.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:08:13.058 00:08:13.058 --- 10.0.0.2 ping statistics --- 00:08:13.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.058 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:13.058 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:13.058 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:08:13.058 00:08:13.058 --- 10.0.0.3 ping statistics --- 00:08:13.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.058 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:13.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:13.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:08:13.058 00:08:13.058 --- 10.0.0.1 ping statistics --- 00:08:13.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.058 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=64804 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 64804 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 64804 ']' 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:13.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:13.058 19:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:13.316 [2024-07-24 19:46:41.748151] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:08:13.316 [2024-07-24 19:46:41.748250] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.316 [2024-07-24 19:46:41.888333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.574 [2024-07-24 19:46:42.011288] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.574 [2024-07-24 19:46:42.011386] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:13.575 [2024-07-24 19:46:42.011400] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.575 [2024-07-24 19:46:42.011411] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.575 [2024-07-24 19:46:42.011420] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:13.575 [2024-07-24 19:46:42.011451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.575 [2024-07-24 19:46:42.070640] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:14.142 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:14.142 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:14.142 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:14.142 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:14.142 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:14.142 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.142 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:14.400 [2024-07-24 19:46:42.963869] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.400 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:14.400 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:14.400 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.400 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:14.400 ************************************ 00:08:14.400 START TEST lvs_grow_clean 00:08:14.400 ************************************ 00:08:14.400 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:14.400 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:14.401 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:14.401 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:14.401 19:46:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:14.401 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:14.401 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:14.401 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:14.401 19:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:14.401 19:46:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:14.659 19:46:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:14.659 19:46:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:14.918 19:46:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e778760b-165c-4d57-ae2d-8c0f2c58fb5c 00:08:14.918 19:46:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e778760b-165c-4d57-ae2d-8c0f2c58fb5c 00:08:14.918 19:46:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:15.191 19:46:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:15.191 19:46:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:15.191 19:46:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e778760b-165c-4d57-ae2d-8c0f2c58fb5c lvol 150 00:08:15.450 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=69e93d40-c6af-4b6a-99b9-a1d476676b32 00:08:15.450 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:15.450 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:15.709 [2024-07-24 19:46:44.255630] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:15.709 [2024-07-24 19:46:44.255713] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:15.709 true 00:08:15.709 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e778760b-165c-4d57-ae2d-8c0f2c58fb5c 00:08:15.709 19:46:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:15.967 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:15.967 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:16.226 19:46:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 69e93d40-c6af-4b6a-99b9-a1d476676b32 00:08:16.485 19:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:16.743 [2024-07-24 19:46:45.252538] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.743 19:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:17.002 19:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=64886 00:08:17.002 19:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:17.002 19:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:17.002 19:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 64886 /var/tmp/bdevperf.sock 00:08:17.002 19:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 64886 ']' 00:08:17.002 19:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:17.002 19:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:17.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:17.002 19:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:17.002 19:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:17.002 19:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:17.002 [2024-07-24 19:46:45.607009] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
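With the lvol in place, the trace exports it over NVMe/TCP and launches bdevperf as the initiator-side load generator. The sketch below mirrors those commands; 10.0.0.2:4420 and the cnode0 NQN are the values this run uses, and -z keeps bdevperf idle until perform_tests is sent over its private RPC socket later in the log:

    # subsystem with serial number SPDK0, any host allowed (-a), namespace = the lvol
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 4 KiB random writes, queue depth 128, 10 seconds; -z waits for perform_tests
    build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &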
00:08:17.002 [2024-07-24 19:46:45.607121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64886 ] 00:08:17.261 [2024-07-24 19:46:45.741261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.261 [2024-07-24 19:46:45.850392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.261 [2024-07-24 19:46:45.903142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:18.196 19:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:18.196 19:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:18.197 19:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:18.456 Nvme0n1 00:08:18.456 19:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:18.714 [ 00:08:18.714 { 00:08:18.714 "name": "Nvme0n1", 00:08:18.714 "aliases": [ 00:08:18.714 "69e93d40-c6af-4b6a-99b9-a1d476676b32" 00:08:18.714 ], 00:08:18.714 "product_name": "NVMe disk", 00:08:18.714 "block_size": 4096, 00:08:18.714 "num_blocks": 38912, 00:08:18.714 "uuid": "69e93d40-c6af-4b6a-99b9-a1d476676b32", 00:08:18.714 "assigned_rate_limits": { 00:08:18.714 "rw_ios_per_sec": 0, 00:08:18.714 "rw_mbytes_per_sec": 0, 00:08:18.714 "r_mbytes_per_sec": 0, 00:08:18.714 "w_mbytes_per_sec": 0 00:08:18.714 }, 00:08:18.714 "claimed": false, 00:08:18.714 "zoned": false, 00:08:18.714 "supported_io_types": { 00:08:18.714 "read": true, 00:08:18.714 "write": true, 00:08:18.714 "unmap": true, 00:08:18.714 "flush": true, 00:08:18.714 "reset": true, 00:08:18.714 "nvme_admin": true, 00:08:18.714 "nvme_io": true, 00:08:18.714 "nvme_io_md": false, 00:08:18.714 "write_zeroes": true, 00:08:18.714 "zcopy": false, 00:08:18.714 "get_zone_info": false, 00:08:18.715 "zone_management": false, 00:08:18.715 "zone_append": false, 00:08:18.715 "compare": true, 00:08:18.715 "compare_and_write": true, 00:08:18.715 "abort": true, 00:08:18.715 "seek_hole": false, 00:08:18.715 "seek_data": false, 00:08:18.715 "copy": true, 00:08:18.715 "nvme_iov_md": false 00:08:18.715 }, 00:08:18.715 "memory_domains": [ 00:08:18.715 { 00:08:18.715 "dma_device_id": "system", 00:08:18.715 "dma_device_type": 1 00:08:18.715 } 00:08:18.715 ], 00:08:18.715 "driver_specific": { 00:08:18.715 "nvme": [ 00:08:18.715 { 00:08:18.715 "trid": { 00:08:18.715 "trtype": "TCP", 00:08:18.715 "adrfam": "IPv4", 00:08:18.715 "traddr": "10.0.0.2", 00:08:18.715 "trsvcid": "4420", 00:08:18.715 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:18.715 }, 00:08:18.715 "ctrlr_data": { 00:08:18.715 "cntlid": 1, 00:08:18.715 "vendor_id": "0x8086", 00:08:18.715 "model_number": "SPDK bdev Controller", 00:08:18.715 "serial_number": "SPDK0", 00:08:18.715 "firmware_revision": "24.09", 00:08:18.715 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:18.715 "oacs": { 00:08:18.715 "security": 0, 00:08:18.715 "format": 0, 00:08:18.715 "firmware": 0, 00:08:18.715 "ns_manage": 0 
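bdevperf is its own SPDK application pinned to core 1, so the exported namespace is attached through that app's RPC socket rather than the target's. A brief sketch of the attach and the sanity check that follows it in the trace; Nvme0n1 is the bdev name SPDK derives from the controller name Nvme0 passed with -b:

    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000   # 38912 blocks, "NVMe disk"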
00:08:18.715 }, 00:08:18.715 "multi_ctrlr": true, 00:08:18.715 "ana_reporting": false 00:08:18.715 }, 00:08:18.715 "vs": { 00:08:18.715 "nvme_version": "1.3" 00:08:18.715 }, 00:08:18.715 "ns_data": { 00:08:18.715 "id": 1, 00:08:18.715 "can_share": true 00:08:18.715 } 00:08:18.715 } 00:08:18.715 ], 00:08:18.715 "mp_policy": "active_passive" 00:08:18.715 } 00:08:18.715 } 00:08:18.715 ] 00:08:18.715 19:46:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:18.715 19:46:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=64910 00:08:18.715 19:46:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:18.715 Running I/O for 10 seconds... 00:08:19.650 Latency(us) 00:08:19.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.650 Nvme0n1 : 1.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:19.650 =================================================================================================================== 00:08:19.650 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:19.650 00:08:20.592 19:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e778760b-165c-4d57-ae2d-8c0f2c58fb5c 00:08:20.850 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.850 Nvme0n1 : 2.00 7429.50 29.02 0.00 0.00 0.00 0.00 0.00 00:08:20.850 =================================================================================================================== 00:08:20.850 Total : 7429.50 29.02 0.00 0.00 0.00 0.00 0.00 00:08:20.850 00:08:20.850 true 00:08:20.850 19:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e778760b-165c-4d57-ae2d-8c0f2c58fb5c 00:08:20.850 19:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:21.417 19:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:21.417 19:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:21.417 19:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 64910 00:08:21.676 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.676 Nvme0n1 : 3.00 7408.33 28.94 0.00 0.00 0.00 0.00 0.00 00:08:21.676 =================================================================================================================== 00:08:21.676 Total : 7408.33 28.94 0.00 0.00 0.00 0.00 0.00 00:08:21.676 00:08:23.053 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.053 Nvme0n1 : 4.00 7397.75 28.90 0.00 0.00 0.00 0.00 0.00 00:08:23.053 =================================================================================================================== 00:08:23.053 Total : 7397.75 28.90 0.00 0.00 0.00 0.00 0.00 00:08:23.053 00:08:23.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.673 Nvme0n1 : 5.00 
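This is the step the test exists for: while the 10-second randwrite job is writing to the exported lvol, the lvstore is grown onto the space added by the earlier truncate and rescan, and the data-cluster count is checked to jump from 49 to 99. A short sketch of that sequence, using the same shell variables as above:

    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &   # start I/O
    sleep 2
    scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"
    scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # now 99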
7391.40 28.87 0.00 0.00 0.00 0.00 0.00 00:08:23.673 =================================================================================================================== 00:08:23.673 Total : 7391.40 28.87 0.00 0.00 0.00 0.00 0.00 00:08:23.673 00:08:25.064 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.064 Nvme0n1 : 6.00 7387.17 28.86 0.00 0.00 0.00 0.00 0.00 00:08:25.064 =================================================================================================================== 00:08:25.064 Total : 7387.17 28.86 0.00 0.00 0.00 0.00 0.00 00:08:25.064 00:08:25.631 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.631 Nvme0n1 : 7.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:25.631 =================================================================================================================== 00:08:25.631 Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:25.631 00:08:27.007 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.007 Nvme0n1 : 8.00 7334.25 28.65 0.00 0.00 0.00 0.00 0.00 00:08:27.007 =================================================================================================================== 00:08:27.007 Total : 7334.25 28.65 0.00 0.00 0.00 0.00 0.00 00:08:27.007 00:08:27.945 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.945 Nvme0n1 : 9.00 7323.67 28.61 0.00 0.00 0.00 0.00 0.00 00:08:27.945 =================================================================================================================== 00:08:27.945 Total : 7323.67 28.61 0.00 0.00 0.00 0.00 0.00 00:08:27.945 00:08:28.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.880 Nvme0n1 : 10.00 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:08:28.880 =================================================================================================================== 00:08:28.880 Total : 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:08:28.880 00:08:28.880 00:08:28.880 Latency(us) 00:08:28.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.880 Nvme0n1 : 10.01 7305.26 28.54 0.00 0.00 17515.33 14775.39 45041.11 00:08:28.880 =================================================================================================================== 00:08:28.880 Total : 7305.26 28.54 0.00 0.00 17515.33 14775.39 45041.11 00:08:28.880 0 00:08:28.880 19:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 64886 00:08:28.880 19:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 64886 ']' 00:08:28.880 19:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 64886 00:08:28.880 19:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:28.880 19:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:28.880 19:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64886 00:08:28.880 19:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:28.880 19:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean 
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:28.880 killing process with pid 64886 00:08:28.880 19:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64886' 00:08:28.880 19:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 64886 00:08:28.880 Received shutdown signal, test time was about 10.000000 seconds 00:08:28.880 00:08:28.880 Latency(us) 00:08:28.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.880 =================================================================================================================== 00:08:28.880 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:28.880 19:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 64886 00:08:29.139 19:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:29.397 19:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:29.397 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e778760b-165c-4d57-ae2d-8c0f2c58fb5c 00:08:29.397 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:29.963 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:29.963 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:29.963 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:29.963 [2024-07-24 19:46:58.548448] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:29.963 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e778760b-165c-4d57-ae2d-8c0f2c58fb5c 00:08:29.963 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:29.963 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e778760b-165c-4d57-ae2d-8c0f2c58fb5c 00:08:29.963 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:29.963 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.963 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:29.963 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.964 19:46:58 
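After bdevperf is killed, the clean variant tears the export down and then deletes the base AIO bdev out from under the lvstore, expecting the lvstore to be closed with it (the hotremove NOTICE above) and any further lookup to fail. Condensed from the trace; the 61 free clusters it checks for is simply 99 total minus the 38 allocated to the lvol:

    scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'   # 61
    scripts/rpc.py bdev_aio_delete aio_bdev          # closes lvstore "lvs" with it
    scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs"  # expected to fail with "No such device"

The lines that follow re-create the AIO bdev on the same file, which re-examines it and brings lvs and the lvol back with 38 allocated clusters intact, before the lvol, lvstore, AIO bdev, and backing file are deleted for good.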
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:29.964 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.964 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:29.964 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:29.964 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e778760b-165c-4d57-ae2d-8c0f2c58fb5c 00:08:30.221 request: 00:08:30.221 { 00:08:30.221 "uuid": "e778760b-165c-4d57-ae2d-8c0f2c58fb5c", 00:08:30.221 "method": "bdev_lvol_get_lvstores", 00:08:30.221 "req_id": 1 00:08:30.221 } 00:08:30.221 Got JSON-RPC error response 00:08:30.221 response: 00:08:30.221 { 00:08:30.221 "code": -19, 00:08:30.221 "message": "No such device" 00:08:30.221 } 00:08:30.221 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:30.221 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:30.221 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:30.221 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:30.221 19:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:30.480 aio_bdev 00:08:30.480 19:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 69e93d40-c6af-4b6a-99b9-a1d476676b32 00:08:30.480 19:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=69e93d40-c6af-4b6a-99b9-a1d476676b32 00:08:30.480 19:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:30.480 19:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:30.480 19:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:30.480 19:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:30.480 19:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:30.738 19:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 69e93d40-c6af-4b6a-99b9-a1d476676b32 -t 2000 00:08:30.996 [ 00:08:30.996 { 00:08:30.996 "name": "69e93d40-c6af-4b6a-99b9-a1d476676b32", 00:08:30.996 "aliases": [ 00:08:30.996 "lvs/lvol" 00:08:30.996 ], 00:08:30.996 "product_name": "Logical Volume", 00:08:30.996 "block_size": 4096, 00:08:30.996 "num_blocks": 38912, 00:08:30.996 "uuid": "69e93d40-c6af-4b6a-99b9-a1d476676b32", 00:08:30.996 
"assigned_rate_limits": { 00:08:30.996 "rw_ios_per_sec": 0, 00:08:30.996 "rw_mbytes_per_sec": 0, 00:08:30.996 "r_mbytes_per_sec": 0, 00:08:30.996 "w_mbytes_per_sec": 0 00:08:30.996 }, 00:08:30.996 "claimed": false, 00:08:30.996 "zoned": false, 00:08:30.996 "supported_io_types": { 00:08:30.996 "read": true, 00:08:30.996 "write": true, 00:08:30.996 "unmap": true, 00:08:30.996 "flush": false, 00:08:30.996 "reset": true, 00:08:30.996 "nvme_admin": false, 00:08:30.996 "nvme_io": false, 00:08:30.996 "nvme_io_md": false, 00:08:30.996 "write_zeroes": true, 00:08:30.996 "zcopy": false, 00:08:30.996 "get_zone_info": false, 00:08:30.996 "zone_management": false, 00:08:30.996 "zone_append": false, 00:08:30.996 "compare": false, 00:08:30.996 "compare_and_write": false, 00:08:30.996 "abort": false, 00:08:30.996 "seek_hole": true, 00:08:30.996 "seek_data": true, 00:08:30.996 "copy": false, 00:08:30.996 "nvme_iov_md": false 00:08:30.996 }, 00:08:30.996 "driver_specific": { 00:08:30.996 "lvol": { 00:08:30.996 "lvol_store_uuid": "e778760b-165c-4d57-ae2d-8c0f2c58fb5c", 00:08:30.996 "base_bdev": "aio_bdev", 00:08:30.996 "thin_provision": false, 00:08:30.996 "num_allocated_clusters": 38, 00:08:30.996 "snapshot": false, 00:08:30.996 "clone": false, 00:08:30.996 "esnap_clone": false 00:08:30.996 } 00:08:30.996 } 00:08:30.996 } 00:08:30.996 ] 00:08:30.996 19:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:30.996 19:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e778760b-165c-4d57-ae2d-8c0f2c58fb5c 00:08:30.996 19:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:31.254 19:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:31.254 19:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:31.254 19:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e778760b-165c-4d57-ae2d-8c0f2c58fb5c 00:08:31.512 19:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:31.512 19:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 69e93d40-c6af-4b6a-99b9-a1d476676b32 00:08:31.812 19:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e778760b-165c-4d57-ae2d-8c0f2c58fb5c 00:08:32.070 19:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:32.328 19:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:32.585 ************************************ 00:08:32.585 END TEST lvs_grow_clean 00:08:32.585 ************************************ 00:08:32.585 00:08:32.585 real 0m18.221s 00:08:32.585 user 0m17.132s 00:08:32.585 sys 0m2.611s 00:08:32.585 19:47:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.585 19:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:32.843 19:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:32.843 19:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:32.843 19:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.843 19:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:32.843 ************************************ 00:08:32.843 START TEST lvs_grow_dirty 00:08:32.843 ************************************ 00:08:32.843 19:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:32.843 19:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:32.843 19:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:32.843 19:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:32.843 19:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:32.843 19:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:32.843 19:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:32.843 19:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:32.843 19:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:32.843 19:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:33.101 19:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:33.101 19:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:33.359 19:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=d594ea0b-bded-4943-8188-aab6d8001bfc 00:08:33.359 19:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:33.359 19:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d594ea0b-bded-4943-8188-aab6d8001bfc 00:08:33.617 19:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:33.617 19:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # 
(( data_clusters == 49 )) 00:08:33.617 19:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d594ea0b-bded-4943-8188-aab6d8001bfc lvol 150 00:08:33.876 19:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=506e040e-0921-48a5-9a8c-5cb912f6cf7b 00:08:33.876 19:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:33.876 19:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:34.134 [2024-07-24 19:47:02.577677] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:34.134 [2024-07-24 19:47:02.577787] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:34.134 true 00:08:34.134 19:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d594ea0b-bded-4943-8188-aab6d8001bfc 00:08:34.134 19:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:34.392 19:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:34.392 19:47:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:34.650 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 506e040e-0921-48a5-9a8c-5cb912f6cf7b 00:08:34.908 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:35.166 [2024-07-24 19:47:03.730384] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:35.166 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:35.441 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65159 00:08:35.441 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:35.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:08:35.441 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:35.441 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65159 /var/tmp/bdevperf.sock 00:08:35.441 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 65159 ']' 00:08:35.441 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:35.441 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:35.441 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:35.441 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:35.441 19:47:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:35.441 [2024-07-24 19:47:04.047615] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:08:35.441 [2024-07-24 19:47:04.048679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65159 ] 00:08:35.726 [2024-07-24 19:47:04.187914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.726 [2024-07-24 19:47:04.317351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.726 [2024-07-24 19:47:04.375354] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:36.660 19:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:36.660 19:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:36.660 19:47:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:36.660 Nvme0n1 00:08:36.660 19:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:36.918 [ 00:08:36.918 { 00:08:36.918 "name": "Nvme0n1", 00:08:36.918 "aliases": [ 00:08:36.918 "506e040e-0921-48a5-9a8c-5cb912f6cf7b" 00:08:36.918 ], 00:08:36.918 "product_name": "NVMe disk", 00:08:36.918 "block_size": 4096, 00:08:36.918 "num_blocks": 38912, 00:08:36.918 "uuid": "506e040e-0921-48a5-9a8c-5cb912f6cf7b", 00:08:36.918 "assigned_rate_limits": { 00:08:36.918 "rw_ios_per_sec": 0, 00:08:36.918 "rw_mbytes_per_sec": 0, 00:08:36.918 "r_mbytes_per_sec": 0, 00:08:36.918 "w_mbytes_per_sec": 0 00:08:36.918 }, 00:08:36.918 "claimed": false, 00:08:36.918 "zoned": false, 00:08:36.918 "supported_io_types": { 00:08:36.918 "read": true, 00:08:36.918 "write": true, 00:08:36.918 "unmap": true, 00:08:36.918 "flush": true, 00:08:36.918 "reset": true, 00:08:36.918 
"nvme_admin": true, 00:08:36.918 "nvme_io": true, 00:08:36.918 "nvme_io_md": false, 00:08:36.918 "write_zeroes": true, 00:08:36.918 "zcopy": false, 00:08:36.918 "get_zone_info": false, 00:08:36.918 "zone_management": false, 00:08:36.918 "zone_append": false, 00:08:36.918 "compare": true, 00:08:36.918 "compare_and_write": true, 00:08:36.918 "abort": true, 00:08:36.918 "seek_hole": false, 00:08:36.918 "seek_data": false, 00:08:36.918 "copy": true, 00:08:36.918 "nvme_iov_md": false 00:08:36.918 }, 00:08:36.918 "memory_domains": [ 00:08:36.918 { 00:08:36.918 "dma_device_id": "system", 00:08:36.918 "dma_device_type": 1 00:08:36.918 } 00:08:36.918 ], 00:08:36.918 "driver_specific": { 00:08:36.918 "nvme": [ 00:08:36.918 { 00:08:36.918 "trid": { 00:08:36.918 "trtype": "TCP", 00:08:36.918 "adrfam": "IPv4", 00:08:36.918 "traddr": "10.0.0.2", 00:08:36.918 "trsvcid": "4420", 00:08:36.918 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:36.918 }, 00:08:36.918 "ctrlr_data": { 00:08:36.918 "cntlid": 1, 00:08:36.918 "vendor_id": "0x8086", 00:08:36.918 "model_number": "SPDK bdev Controller", 00:08:36.918 "serial_number": "SPDK0", 00:08:36.918 "firmware_revision": "24.09", 00:08:36.918 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:36.918 "oacs": { 00:08:36.918 "security": 0, 00:08:36.918 "format": 0, 00:08:36.918 "firmware": 0, 00:08:36.918 "ns_manage": 0 00:08:36.918 }, 00:08:36.918 "multi_ctrlr": true, 00:08:36.918 "ana_reporting": false 00:08:36.918 }, 00:08:36.918 "vs": { 00:08:36.918 "nvme_version": "1.3" 00:08:36.918 }, 00:08:36.918 "ns_data": { 00:08:36.918 "id": 1, 00:08:36.918 "can_share": true 00:08:36.918 } 00:08:36.918 } 00:08:36.918 ], 00:08:36.918 "mp_policy": "active_passive" 00:08:36.918 } 00:08:36.918 } 00:08:36.918 ] 00:08:36.918 19:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65183 00:08:36.918 19:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:36.918 19:47:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:37.176 Running I/O for 10 seconds... 
00:08:38.110 Latency(us) 00:08:38.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.110 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.110 Nvme0n1 : 1.00 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:08:38.110 =================================================================================================================== 00:08:38.110 Total : 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:08:38.110 00:08:39.044 19:47:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d594ea0b-bded-4943-8188-aab6d8001bfc 00:08:39.044 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.044 Nvme0n1 : 2.00 7556.50 29.52 0.00 0.00 0.00 0.00 0.00 00:08:39.044 =================================================================================================================== 00:08:39.044 Total : 7556.50 29.52 0.00 0.00 0.00 0.00 0.00 00:08:39.044 00:08:39.302 true 00:08:39.302 19:47:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d594ea0b-bded-4943-8188-aab6d8001bfc 00:08:39.302 19:47:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:39.561 19:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:39.561 19:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:39.561 19:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 65183 00:08:40.127 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.127 Nvme0n1 : 3.00 7535.33 29.43 0.00 0.00 0.00 0.00 0.00 00:08:40.127 =================================================================================================================== 00:08:40.127 Total : 7535.33 29.43 0.00 0.00 0.00 0.00 0.00 00:08:40.127 00:08:41.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.061 Nvme0n1 : 4.00 7524.75 29.39 0.00 0.00 0.00 0.00 0.00 00:08:41.061 =================================================================================================================== 00:08:41.061 Total : 7524.75 29.39 0.00 0.00 0.00 0.00 0.00 00:08:41.061 00:08:41.994 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.994 Nvme0n1 : 5.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:41.994 =================================================================================================================== 00:08:41.994 Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:08:41.994 00:08:42.988 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.988 Nvme0n1 : 6.00 7450.67 29.10 0.00 0.00 0.00 0.00 0.00 00:08:42.988 =================================================================================================================== 00:08:42.988 Total : 7450.67 29.10 0.00 0.00 0.00 0.00 0.00 00:08:42.988 00:08:44.362 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.362 Nvme0n1 : 7.00 7402.29 28.92 0.00 0.00 0.00 0.00 0.00 00:08:44.362 =================================================================================================================== 00:08:44.362 
Total : 7402.29 28.92 0.00 0.00 0.00 0.00 0.00 00:08:44.362 00:08:45.295 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.295 Nvme0n1 : 8.00 7339.50 28.67 0.00 0.00 0.00 0.00 0.00 00:08:45.295 =================================================================================================================== 00:08:45.295 Total : 7339.50 28.67 0.00 0.00 0.00 0.00 0.00 00:08:45.295 00:08:46.229 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.229 Nvme0n1 : 9.00 7300.11 28.52 0.00 0.00 0.00 0.00 0.00 00:08:46.229 =================================================================================================================== 00:08:46.229 Total : 7300.11 28.52 0.00 0.00 0.00 0.00 0.00 00:08:46.229 00:08:47.187 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.187 Nvme0n1 : 10.00 7255.90 28.34 0.00 0.00 0.00 0.00 0.00 00:08:47.187 =================================================================================================================== 00:08:47.187 Total : 7255.90 28.34 0.00 0.00 0.00 0.00 0.00 00:08:47.187 00:08:47.187 00:08:47.187 Latency(us) 00:08:47.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.187 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.187 Nvme0n1 : 10.01 7261.27 28.36 0.00 0.00 17623.77 6106.76 71970.44 00:08:47.187 =================================================================================================================== 00:08:47.187 Total : 7261.27 28.36 0.00 0.00 17623.77 6106.76 71970.44 00:08:47.187 0 00:08:47.187 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65159 00:08:47.187 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 65159 ']' 00:08:47.187 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 65159 00:08:47.187 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:47.187 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:47.187 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65159 00:08:47.187 killing process with pid 65159 00:08:47.187 Received shutdown signal, test time was about 10.000000 seconds 00:08:47.187 00:08:47.187 Latency(us) 00:08:47.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.187 =================================================================================================================== 00:08:47.187 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:47.187 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:47.187 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:47.187 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65159' 00:08:47.187 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 65159 00:08:47.187 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 65159 
00:08:47.445 19:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:47.703 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:47.961 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d594ea0b-bded-4943-8188-aab6d8001bfc 00:08:47.961 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:48.219 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:48.219 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:48.219 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 64804 00:08:48.219 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 64804 00:08:48.219 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 64804 Killed "${NVMF_APP[@]}" "$@" 00:08:48.219 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:48.219 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:48.219 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:48.219 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:48.219 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:48.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.219 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=65321 00:08:48.219 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:48.219 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 65321 00:08:48.219 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 65321 ']' 00:08:48.219 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.219 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:48.219 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:48.219 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:48.219 19:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:48.219 [2024-07-24 19:47:16.779537] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:08:48.219 [2024-07-24 19:47:16.779612] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.477 [2024-07-24 19:47:16.917459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.477 [2024-07-24 19:47:17.023226] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.477 [2024-07-24 19:47:17.023283] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.477 [2024-07-24 19:47:17.023294] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.477 [2024-07-24 19:47:17.023302] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.477 [2024-07-24 19:47:17.023309] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.477 [2024-07-24 19:47:17.023337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.477 [2024-07-24 19:47:17.075726] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:49.411 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.411 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:49.411 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:49.411 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:49.411 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:49.411 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.411 19:47:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:49.411 [2024-07-24 19:47:17.959652] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:49.411 [2024-07-24 19:47:17.960127] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:49.411 [2024-07-24 19:47:17.960554] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:49.411 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:49.411 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 506e040e-0921-48a5-9a8c-5cb912f6cf7b 00:08:49.411 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=506e040e-0921-48a5-9a8c-5cb912f6cf7b 00:08:49.411 19:47:18 
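The dirty variant repeats the same setup and grow, but at teardown it SIGKILLs the nvmf target so the lvstore is never cleanly closed, restarts the target, and registers the AIO bdev again; the "Performing recovery on blobstore" NOTICE above is the point of the exercise, showing the lvstore metadata being replayed instead of loaded clean. Roughly, with the ip netns wrapper from the trace omitted:

    kill -9 "$nvmfpid"; wait "$nvmfpid" || true        # 64804 in this run
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &         # fresh target process (pid 65321 here)
    scripts/rpc.py bdev_aio_create "$aio_file" aio_bdev 4096   # triggers blobstore recovery of blobs 0x0 and 0x1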
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:49.411 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:49.411 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:49.411 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:49.411 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:49.669 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 506e040e-0921-48a5-9a8c-5cb912f6cf7b -t 2000 00:08:49.928 [ 00:08:49.928 { 00:08:49.928 "name": "506e040e-0921-48a5-9a8c-5cb912f6cf7b", 00:08:49.928 "aliases": [ 00:08:49.928 "lvs/lvol" 00:08:49.928 ], 00:08:49.928 "product_name": "Logical Volume", 00:08:49.928 "block_size": 4096, 00:08:49.928 "num_blocks": 38912, 00:08:49.928 "uuid": "506e040e-0921-48a5-9a8c-5cb912f6cf7b", 00:08:49.928 "assigned_rate_limits": { 00:08:49.928 "rw_ios_per_sec": 0, 00:08:49.928 "rw_mbytes_per_sec": 0, 00:08:49.928 "r_mbytes_per_sec": 0, 00:08:49.928 "w_mbytes_per_sec": 0 00:08:49.928 }, 00:08:49.928 "claimed": false, 00:08:49.928 "zoned": false, 00:08:49.928 "supported_io_types": { 00:08:49.928 "read": true, 00:08:49.928 "write": true, 00:08:49.928 "unmap": true, 00:08:49.928 "flush": false, 00:08:49.928 "reset": true, 00:08:49.928 "nvme_admin": false, 00:08:49.928 "nvme_io": false, 00:08:49.928 "nvme_io_md": false, 00:08:49.928 "write_zeroes": true, 00:08:49.928 "zcopy": false, 00:08:49.928 "get_zone_info": false, 00:08:49.928 "zone_management": false, 00:08:49.928 "zone_append": false, 00:08:49.928 "compare": false, 00:08:49.928 "compare_and_write": false, 00:08:49.928 "abort": false, 00:08:49.928 "seek_hole": true, 00:08:49.928 "seek_data": true, 00:08:49.928 "copy": false, 00:08:49.928 "nvme_iov_md": false 00:08:49.928 }, 00:08:49.928 "driver_specific": { 00:08:49.928 "lvol": { 00:08:49.928 "lvol_store_uuid": "d594ea0b-bded-4943-8188-aab6d8001bfc", 00:08:49.928 "base_bdev": "aio_bdev", 00:08:49.928 "thin_provision": false, 00:08:49.928 "num_allocated_clusters": 38, 00:08:49.928 "snapshot": false, 00:08:49.928 "clone": false, 00:08:49.928 "esnap_clone": false 00:08:49.928 } 00:08:49.928 } 00:08:49.928 } 00:08:49.928 ] 00:08:49.928 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:49.928 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d594ea0b-bded-4943-8188-aab6d8001bfc 00:08:49.928 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:50.186 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:50.186 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d594ea0b-bded-4943-8188-aab6d8001bfc 00:08:50.186 19:47:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r 
'.[0].total_data_clusters' 00:08:50.444 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:50.444 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:50.702 [2024-07-24 19:47:19.253492] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:50.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d594ea0b-bded-4943-8188-aab6d8001bfc 00:08:50.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:50.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d594ea0b-bded-4943-8188-aab6d8001bfc 00:08:50.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:50.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:50.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:50.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:50.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:50.702 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d594ea0b-bded-4943-8188-aab6d8001bfc 00:08:50.961 request: 00:08:50.961 { 00:08:50.961 "uuid": "d594ea0b-bded-4943-8188-aab6d8001bfc", 00:08:50.961 "method": "bdev_lvol_get_lvstores", 00:08:50.961 "req_id": 1 00:08:50.961 } 00:08:50.961 Got JSON-RPC error response 00:08:50.961 response: 00:08:50.961 { 00:08:50.961 "code": -19, 00:08:50.961 "message": "No such device" 00:08:50.961 } 00:08:50.961 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:50.961 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:50.961 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:50.961 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:50.961 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:51.221 aio_bdev 00:08:51.221 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 506e040e-0921-48a5-9a8c-5cb912f6cf7b 00:08:51.221 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=506e040e-0921-48a5-9a8c-5cb912f6cf7b 00:08:51.221 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:51.221 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:51.221 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:51.221 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:51.221 19:47:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:51.479 19:47:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 506e040e-0921-48a5-9a8c-5cb912f6cf7b -t 2000 00:08:51.738 [ 00:08:51.738 { 00:08:51.738 "name": "506e040e-0921-48a5-9a8c-5cb912f6cf7b", 00:08:51.738 "aliases": [ 00:08:51.738 "lvs/lvol" 00:08:51.738 ], 00:08:51.738 "product_name": "Logical Volume", 00:08:51.738 "block_size": 4096, 00:08:51.738 "num_blocks": 38912, 00:08:51.738 "uuid": "506e040e-0921-48a5-9a8c-5cb912f6cf7b", 00:08:51.738 "assigned_rate_limits": { 00:08:51.738 "rw_ios_per_sec": 0, 00:08:51.738 "rw_mbytes_per_sec": 0, 00:08:51.738 "r_mbytes_per_sec": 0, 00:08:51.738 "w_mbytes_per_sec": 0 00:08:51.738 }, 00:08:51.738 "claimed": false, 00:08:51.738 "zoned": false, 00:08:51.738 "supported_io_types": { 00:08:51.738 "read": true, 00:08:51.738 "write": true, 00:08:51.738 "unmap": true, 00:08:51.738 "flush": false, 00:08:51.738 "reset": true, 00:08:51.738 "nvme_admin": false, 00:08:51.738 "nvme_io": false, 00:08:51.738 "nvme_io_md": false, 00:08:51.738 "write_zeroes": true, 00:08:51.738 "zcopy": false, 00:08:51.738 "get_zone_info": false, 00:08:51.738 "zone_management": false, 00:08:51.738 "zone_append": false, 00:08:51.738 "compare": false, 00:08:51.738 "compare_and_write": false, 00:08:51.738 "abort": false, 00:08:51.738 "seek_hole": true, 00:08:51.738 "seek_data": true, 00:08:51.738 "copy": false, 00:08:51.738 "nvme_iov_md": false 00:08:51.738 }, 00:08:51.738 "driver_specific": { 00:08:51.738 "lvol": { 00:08:51.738 "lvol_store_uuid": "d594ea0b-bded-4943-8188-aab6d8001bfc", 00:08:51.738 "base_bdev": "aio_bdev", 00:08:51.738 "thin_provision": false, 00:08:51.738 "num_allocated_clusters": 38, 00:08:51.738 "snapshot": false, 00:08:51.738 "clone": false, 00:08:51.738 "esnap_clone": false 00:08:51.738 } 00:08:51.738 } 00:08:51.738 } 00:08:51.738 ] 00:08:51.738 19:47:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:51.738 19:47:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d594ea0b-bded-4943-8188-aab6d8001bfc 00:08:51.738 19:47:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r 
'.[0].free_clusters' 00:08:51.997 19:47:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:51.997 19:47:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:51.997 19:47:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d594ea0b-bded-4943-8188-aab6d8001bfc 00:08:52.255 19:47:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:52.255 19:47:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 506e040e-0921-48a5-9a8c-5cb912f6cf7b 00:08:52.514 19:47:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d594ea0b-bded-4943-8188-aab6d8001bfc 00:08:52.772 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:53.031 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:53.290 ************************************ 00:08:53.290 END TEST lvs_grow_dirty 00:08:53.290 ************************************ 00:08:53.290 00:08:53.290 real 0m20.617s 00:08:53.290 user 0m43.450s 00:08:53.290 sys 0m7.994s 00:08:53.290 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.290 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:53.290 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:53.290 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:53.290 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:53.290 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:53.290 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:53.290 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:53.290 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:53.290 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:53.290 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:53.290 nvmf_trace.0 00:08:53.549 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:53.549 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:53.549 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:53.549 19:47:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:53.549 19:47:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:53.549 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:53.549 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:53.549 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:53.549 rmmod nvme_tcp 00:08:53.549 rmmod nvme_fabrics 00:08:53.808 rmmod nvme_keyring 00:08:53.808 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:53.808 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:53.808 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:53.808 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 65321 ']' 00:08:53.808 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 65321 00:08:53.808 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 65321 ']' 00:08:53.808 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 65321 00:08:53.808 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:53.808 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:53.808 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65321 00:08:53.808 killing process with pid 65321 00:08:53.808 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:53.808 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:53.808 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65321' 00:08:53.808 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 65321 00:08:53.808 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 65321 00:08:54.069 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:54.070 ************************************ 00:08:54.070 END TEST nvmf_lvs_grow 00:08:54.070 ************************************ 00:08:54.070 00:08:54.070 real 0m41.287s 00:08:54.070 user 1m6.862s 00:08:54.070 sys 0m11.360s 
00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:54.070 ************************************ 00:08:54.070 START TEST nvmf_bdev_io_wait 00:08:54.070 ************************************ 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:54.070 * Looking for test storage... 00:08:54.070 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:54.070 
19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:54.070 Cannot find device "nvmf_tgt_br" 00:08:54.070 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:08:54.071 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:54.071 Cannot find device "nvmf_tgt_br2" 00:08:54.071 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:08:54.071 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:54.071 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:54.329 Cannot find device "nvmf_tgt_br" 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:54.329 Cannot find device "nvmf_tgt_br2" 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:54.329 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:54.329 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:54.329 19:47:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:54.329 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:54.590 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:54.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:54.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:08:54.590 00:08:54.590 --- 10.0.0.2 ping statistics --- 00:08:54.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.590 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:54.590 19:47:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:54.590 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:54.590 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:08:54.590 00:08:54.590 --- 10.0.0.3 ping statistics --- 00:08:54.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.590 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:54.590 19:47:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:54.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:54.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:08:54.590 00:08:54.590 --- 10.0.0.1 ping statistics --- 00:08:54.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.590 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:08:54.590 19:47:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.590 19:47:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:08:54.590 19:47:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:54.590 19:47:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.590 19:47:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:54.590 19:47:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:54.590 19:47:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.590 19:47:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:54.590 19:47:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:54.590 19:47:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:54.590 19:47:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:54.590 19:47:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:54.590 19:47:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.590 19:47:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=65634 00:08:54.590 19:47:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:54.590 19:47:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 65634 00:08:54.590 19:47:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 65634 ']' 00:08:54.590 19:47:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.590 19:47:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.590 19:47:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:54.590 19:47:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.590 19:47:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.590 [2024-07-24 19:47:23.085439] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:08:54.590 [2024-07-24 19:47:23.085506] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.590 [2024-07-24 19:47:23.219265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:54.862 [2024-07-24 19:47:23.333102] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.862 [2024-07-24 19:47:23.333186] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.862 [2024-07-24 19:47:23.333197] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.862 [2024-07-24 19:47:23.333204] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.862 [2024-07-24 19:47:23.333211] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.862 [2024-07-24 19:47:23.333390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.862 [2024-07-24 19:47:23.333966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.862 [2024-07-24 19:47:23.334073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:54.862 [2024-07-24 19:47:23.334079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.433 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:55.433 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:55.433 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:55.433 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:55.433 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.433 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.433 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:55.433 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.433 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.433 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.433 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:55.433 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.433 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.693 [2024-07-24 19:47:24.111908] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion 
override: uring 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.693 [2024-07-24 19:47:24.128135] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.693 Malloc0 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:55.693 [2024-07-24 19:47:24.193833] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=65669 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=65671 00:08:55.693 19:47:24 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:55.693 { 00:08:55.693 "params": { 00:08:55.693 "name": "Nvme$subsystem", 00:08:55.693 "trtype": "$TEST_TRANSPORT", 00:08:55.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.693 "adrfam": "ipv4", 00:08:55.693 "trsvcid": "$NVMF_PORT", 00:08:55.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.693 "hdgst": ${hdgst:-false}, 00:08:55.693 "ddgst": ${ddgst:-false} 00:08:55.693 }, 00:08:55.693 "method": "bdev_nvme_attach_controller" 00:08:55.693 } 00:08:55.693 EOF 00:08:55.693 )") 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=65673 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:55.693 { 00:08:55.693 "params": { 00:08:55.693 "name": "Nvme$subsystem", 00:08:55.693 "trtype": "$TEST_TRANSPORT", 00:08:55.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.693 "adrfam": "ipv4", 00:08:55.693 "trsvcid": "$NVMF_PORT", 00:08:55.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.693 "hdgst": ${hdgst:-false}, 00:08:55.693 "ddgst": ${ddgst:-false} 00:08:55.693 }, 00:08:55.693 "method": "bdev_nvme_attach_controller" 00:08:55.693 } 00:08:55.693 EOF 00:08:55.693 )") 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=65675 00:08:55.693 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:55.694 
19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:55.694 { 00:08:55.694 "params": { 00:08:55.694 "name": "Nvme$subsystem", 00:08:55.694 "trtype": "$TEST_TRANSPORT", 00:08:55.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.694 "adrfam": "ipv4", 00:08:55.694 "trsvcid": "$NVMF_PORT", 00:08:55.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.694 "hdgst": ${hdgst:-false}, 00:08:55.694 "ddgst": ${ddgst:-false} 00:08:55.694 }, 00:08:55.694 "method": "bdev_nvme_attach_controller" 00:08:55.694 } 00:08:55.694 EOF 00:08:55.694 )") 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:55.694 "params": { 00:08:55.694 "name": "Nvme1", 00:08:55.694 "trtype": "tcp", 00:08:55.694 "traddr": "10.0.0.2", 00:08:55.694 "adrfam": "ipv4", 00:08:55.694 "trsvcid": "4420", 00:08:55.694 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:55.694 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:55.694 "hdgst": false, 00:08:55.694 "ddgst": false 00:08:55.694 }, 00:08:55.694 "method": "bdev_nvme_attach_controller" 00:08:55.694 }' 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:55.694 { 00:08:55.694 "params": { 00:08:55.694 "name": "Nvme$subsystem", 00:08:55.694 "trtype": "$TEST_TRANSPORT", 00:08:55.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:55.694 "adrfam": "ipv4", 00:08:55.694 "trsvcid": "$NVMF_PORT", 00:08:55.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:55.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:55.694 "hdgst": ${hdgst:-false}, 00:08:55.694 "ddgst": ${ddgst:-false} 00:08:55.694 }, 00:08:55.694 "method": "bdev_nvme_attach_controller" 00:08:55.694 } 00:08:55.694 EOF 00:08:55.694 )") 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:55.694 "params": { 00:08:55.694 "name": "Nvme1", 00:08:55.694 "trtype": "tcp", 00:08:55.694 "traddr": "10.0.0.2", 00:08:55.694 "adrfam": "ipv4", 00:08:55.694 "trsvcid": "4420", 00:08:55.694 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:55.694 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:55.694 "hdgst": false, 00:08:55.694 "ddgst": false 00:08:55.694 }, 00:08:55.694 
"method": "bdev_nvme_attach_controller" 00:08:55.694 }' 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:55.694 "params": { 00:08:55.694 "name": "Nvme1", 00:08:55.694 "trtype": "tcp", 00:08:55.694 "traddr": "10.0.0.2", 00:08:55.694 "adrfam": "ipv4", 00:08:55.694 "trsvcid": "4420", 00:08:55.694 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:55.694 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:55.694 "hdgst": false, 00:08:55.694 "ddgst": false 00:08:55.694 }, 00:08:55.694 "method": "bdev_nvme_attach_controller" 00:08:55.694 }' 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:55.694 "params": { 00:08:55.694 "name": "Nvme1", 00:08:55.694 "trtype": "tcp", 00:08:55.694 "traddr": "10.0.0.2", 00:08:55.694 "adrfam": "ipv4", 00:08:55.694 "trsvcid": "4420", 00:08:55.694 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:55.694 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:55.694 "hdgst": false, 00:08:55.694 "ddgst": false 00:08:55.694 }, 00:08:55.694 "method": "bdev_nvme_attach_controller" 00:08:55.694 }' 00:08:55.694 19:47:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 65669 00:08:55.694 [2024-07-24 19:47:24.258606] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:08:55.694 [2024-07-24 19:47:24.258690] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:55.694 [2024-07-24 19:47:24.259005] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:08:55.694 [2024-07-24 19:47:24.259071] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:55.694 [2024-07-24 19:47:24.275908] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:08:55.694 [2024-07-24 19:47:24.275970] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:55.694 [2024-07-24 19:47:24.290630] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:08:55.694 [2024-07-24 19:47:24.290715] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:55.953 [2024-07-24 19:47:24.474553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.953 [2024-07-24 19:47:24.551825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.953 [2024-07-24 19:47:24.573793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:56.211 [2024-07-24 19:47:24.620402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.211 [2024-07-24 19:47:24.622334] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:56.211 [2024-07-24 19:47:24.648812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:56.211 [2024-07-24 19:47:24.698394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.211 [2024-07-24 19:47:24.698982] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:56.211 [2024-07-24 19:47:24.715782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:56.211 Running I/O for 1 seconds... 00:08:56.212 [2024-07-24 19:47:24.763358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:56.212 [2024-07-24 19:47:24.784080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:56.212 Running I/O for 1 seconds... 00:08:56.212 [2024-07-24 19:47:24.828990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:56.471 Running I/O for 1 seconds... 00:08:56.471 Running I/O for 1 seconds... 
00:08:57.407 00:08:57.407 Latency(us) 00:08:57.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.407 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:57.407 Nvme1n1 : 1.02 6110.73 23.87 0.00 0.00 20715.29 6672.76 36461.85 00:08:57.407 =================================================================================================================== 00:08:57.407 Total : 6110.73 23.87 0.00 0.00 20715.29 6672.76 36461.85 00:08:57.407 00:08:57.407 Latency(us) 00:08:57.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.407 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:57.407 Nvme1n1 : 1.01 9389.09 36.68 0.00 0.00 13567.94 8340.95 25499.46 00:08:57.407 =================================================================================================================== 00:08:57.407 Total : 9389.09 36.68 0.00 0.00 13567.94 8340.95 25499.46 00:08:57.407 00:08:57.407 Latency(us) 00:08:57.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.407 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:57.407 Nvme1n1 : 1.00 176866.67 690.89 0.00 0.00 720.94 342.57 1020.28 00:08:57.407 =================================================================================================================== 00:08:57.407 Total : 176866.67 690.89 0.00 0.00 720.94 342.57 1020.28 00:08:57.407 00:08:57.407 Latency(us) 00:08:57.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.407 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:57.407 Nvme1n1 : 1.01 6537.25 25.54 0.00 0.00 19501.25 7417.48 41943.04 00:08:57.407 =================================================================================================================== 00:08:57.407 Total : 6537.25 25.54 0.00 0.00 19501.25 7417.48 41943.04 00:08:57.407 19:47:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 65671 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 65673 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 65675 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:57.667 rmmod nvme_tcp 00:08:57.667 rmmod nvme_fabrics 00:08:57.667 rmmod nvme_keyring 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 65634 ']' 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 65634 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 65634 ']' 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 65634 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65634 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:57.667 killing process with pid 65634 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65634' 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 65634 00:08:57.667 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 65634 00:08:57.926 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:57.926 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:57.926 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:57.926 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:57.926 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:57.926 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.926 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.926 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.926 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:57.926 00:08:57.926 real 0m3.945s 00:08:57.926 user 0m17.352s 00:08:57.926 sys 0m2.178s 00:08:57.926 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.926 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.926 ************************************ 00:08:57.926 END TEST nvmf_bdev_io_wait 
00:08:57.926 ************************************ 00:08:57.926 19:47:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:57.926 19:47:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:57.926 19:47:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.926 19:47:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:57.926 ************************************ 00:08:57.926 START TEST nvmf_queue_depth 00:08:57.926 ************************************ 00:08:57.926 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:58.185 * Looking for test storage... 00:08:58.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:58.185 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:58.185 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:58.185 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.185 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.185 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.185 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.185 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.185 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.185 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.185 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.185 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.185 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.185 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:08:58.185 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:08:58.185 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.185 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 
-- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # 
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:58.186 Cannot find device "nvmf_tgt_br" 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:58.186 Cannot find device "nvmf_tgt_br2" 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:08:58.186 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:58.187 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:58.187 Cannot find device "nvmf_tgt_br" 00:08:58.187 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:08:58.187 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:58.187 Cannot find device "nvmf_tgt_br2" 00:08:58.187 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:08:58.187 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:58.187 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:58.187 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:58.187 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:58.187 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:58.187 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:58.187 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:58.187 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:58.187 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:58.187 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:58.187 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:58.446 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:58.446 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:58.446 19:47:26 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:58.446 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:58.446 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:58.446 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:58.446 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:58.446 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:58.446 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:58.446 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:58.446 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:58.446 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:58.446 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:58.446 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:58.446 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:58.446 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:58.446 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:58.446 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:58.446 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:58.446 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:58.446 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:58.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:58.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:08:58.446 00:08:58.446 --- 10.0.0.2 ping statistics --- 00:08:58.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.446 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:08:58.446 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:58.446 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:58.446 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:08:58.446 00:08:58.446 --- 10.0.0.3 ping statistics --- 00:08:58.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.446 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:08:58.446 19:47:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:58.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:58.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:08:58.446 00:08:58.446 --- 10.0.0.1 ping statistics --- 00:08:58.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.446 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:58.446 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:58.446 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:08:58.446 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:58.446 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:58.446 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:58.446 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:58.446 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:58.446 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:58.446 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:58.446 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:58.446 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:58.446 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:58.446 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:58.446 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=65904 00:08:58.446 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:58.446 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 65904 00:08:58.446 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 65904 ']' 00:08:58.446 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.446 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:58.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.447 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.447 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:58.447 19:47:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:58.447 [2024-07-24 19:47:27.081731] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
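The ip/iptables trace above is nvmf_veth_init building the virtual topology the queue-depth test runs on: a dedicated network namespace for the target, veth pairs whose host-side ends are enslaved to a bridge, 10.0.0.1/2/3 addressing, and an iptables rule admitting TCP port 4420. Condensed to the commands that actually appear in the trace, the setup is roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

(with the interfaces, the namespace loopback, and the bridge brought up in between), after which the three pings confirm that 10.0.0.2 and 10.0.0.3 are reachable from the host and 10.0.0.1 is reachable from inside the namespace.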
00:08:58.447 [2024-07-24 19:47:27.081831] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.705 [2024-07-24 19:47:27.216942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.705 [2024-07-24 19:47:27.328203] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:58.705 [2024-07-24 19:47:27.328259] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:58.705 [2024-07-24 19:47:27.328271] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:58.705 [2024-07-24 19:47:27.328280] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:58.705 [2024-07-24 19:47:27.328287] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:58.705 [2024-07-24 19:47:27.328325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.964 [2024-07-24 19:47:27.384876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.532 [2024-07-24 19:47:28.096585] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.532 Malloc0 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.532 [2024-07-24 19:47:28.162668] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=65936 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 65936 /var/tmp/bdevperf.sock 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 65936 ']' 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:59.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:59.532 19:47:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.791 [2024-07-24 19:47:28.215848] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
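Stripped of the xtrace noise, the target-side configuration for the queue-depth test amounts to five RPCs against the nvmf_tgt instance (pid 65904) started inside the namespace. The test issues them through its rpc_cmd helper; a roughly equivalent direct invocation of rpc.py would be:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With the 64 MiB Malloc0 bdev exported on 10.0.0.2:4420, the test then launches bdevperf (pid 65936) with -q 1024 -o 4096 -w verify -t 10, as traced above.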
00:08:59.791 [2024-07-24 19:47:28.215923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65936 ] 00:08:59.791 [2024-07-24 19:47:28.349846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.049 [2024-07-24 19:47:28.476278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.049 [2024-07-24 19:47:28.534510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:00.613 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:00.613 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:00.613 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:00.613 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.613 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:00.613 NVMe0n1 00:09:00.613 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.613 19:47:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:00.871 Running I/O for 10 seconds... 00:09:10.852 00:09:10.852 Latency(us) 00:09:10.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.852 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:10.852 Verification LBA range: start 0x0 length 0x4000 00:09:10.852 NVMe0n1 : 10.07 8351.10 32.62 0.00 0.00 122090.84 21924.77 94371.84 00:09:10.852 =================================================================================================================== 00:09:10.852 Total : 8351.10 32.62 0.00 0.00 122090.84 21924.77 94371.84 00:09:10.852 0 00:09:10.852 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 65936 00:09:10.852 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 65936 ']' 00:09:10.852 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 65936 00:09:10.852 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:10.852 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:10.852 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65936 00:09:10.852 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:10.852 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:10.852 killing process with pid 65936 00:09:10.852 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65936' 00:09:10.852 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # 
kill 65936 00:09:10.852 Received shutdown signal, test time was about 10.000000 seconds 00:09:10.852 00:09:10.852 Latency(us) 00:09:10.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.852 =================================================================================================================== 00:09:10.852 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:10.852 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 65936 00:09:11.108 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:11.108 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:11.108 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:11.108 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:11.108 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:11.108 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:11.108 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:11.108 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:11.108 rmmod nvme_tcp 00:09:11.366 rmmod nvme_fabrics 00:09:11.366 rmmod nvme_keyring 00:09:11.366 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:11.366 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:11.366 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:11.366 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 65904 ']' 00:09:11.366 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 65904 00:09:11.366 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 65904 ']' 00:09:11.366 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 65904 00:09:11.366 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:11.366 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:11.366 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65904 00:09:11.366 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:11.366 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:11.366 killing process with pid 65904 00:09:11.366 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65904' 00:09:11.366 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 65904 00:09:11.366 19:47:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 65904 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:11.624 19:47:40 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:11.624 00:09:11.624 real 0m13.551s 00:09:11.624 user 0m23.537s 00:09:11.624 sys 0m2.184s 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:11.624 ************************************ 00:09:11.624 END TEST nvmf_queue_depth 00:09:11.624 ************************************ 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:11.624 ************************************ 00:09:11.624 START TEST nvmf_target_multipath 00:09:11.624 ************************************ 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:11.624 * Looking for test storage... 
00:09:11.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:09:11.624 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:11.625 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:11.883 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.883 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:11.883 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:11.883 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:11.883 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.883 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.883 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.883 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:11.883 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:11.883 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:11.883 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:11.883 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:11.883 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:11.883 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.883 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:11.883 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:11.883 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:11.883 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:11.883 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:11.883 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:11.883 19:47:40 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.883 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:11.884 Cannot find device "nvmf_tgt_br" 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:11.884 Cannot find device "nvmf_tgt_br2" 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:11.884 Cannot find device "nvmf_tgt_br" 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:11.884 Cannot find device "nvmf_tgt_br2" 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:11.884 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:11.884 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:11.884 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:12.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:12.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:09:12.143 00:09:12.143 --- 10.0.0.2 ping statistics --- 00:09:12.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.143 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:12.143 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:12.143 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:09:12.143 00:09:12.143 --- 10.0.0.3 ping statistics --- 00:09:12.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.143 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:12.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:12.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:09:12.143 00:09:12.143 --- 10.0.0.1 ping statistics --- 00:09:12.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.143 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=66260 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 66260 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 66260 ']' 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:12.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
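This second target is started with -m 0xF (four reactors) rather than the single core used for the queue-depth test, because the multipath test that follows exports one subsystem through two portals. As the trace further down shows, once cnode1 is created with ANA reporting enabled (the -r flag) and listeners are added on both 10.0.0.2:4420 and 10.0.0.3:4420, the host side reduces to two nvme-cli connects plus a serial check, roughly:

  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
  # waitforserial: the namespace shows up once under the subsystem serial
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME

where NVME_HOSTNQN and NVME_HOSTID hold the values generated by nvmf/common.sh earlier in this log.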
00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:12.143 19:47:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:12.143 [2024-07-24 19:47:40.720946] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:09:12.143 [2024-07-24 19:47:40.721058] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.402 [2024-07-24 19:47:40.858933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:12.402 [2024-07-24 19:47:40.972360] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.402 [2024-07-24 19:47:40.972695] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.402 [2024-07-24 19:47:40.972870] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.402 [2024-07-24 19:47:40.972924] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.402 [2024-07-24 19:47:40.973023] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:12.402 [2024-07-24 19:47:40.973156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.402 [2024-07-24 19:47:40.973790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.402 [2024-07-24 19:47:40.973910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:12.402 [2024-07-24 19:47:40.974040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.402 [2024-07-24 19:47:41.027399] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:13.337 19:47:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:13.337 19:47:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:09:13.337 19:47:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:13.337 19:47:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:13.337 19:47:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:13.337 19:47:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.337 19:47:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:13.595 [2024-07-24 19:47:42.004917] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.595 19:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:13.853 Malloc0 00:09:13.853 19:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:14.111 19:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:14.370 19:47:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:14.633 [2024-07-24 19:47:43.167003] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.633 19:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:14.898 [2024-07-24 19:47:43.391229] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:14.898 19:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid=69cdc0e8-4c23-4318-834b-1d87efff05de -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:14.898 19:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid=69cdc0e8-4c23-4318-834b-1d87efff05de -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:15.180 19:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:15.180 19:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:09:15.180 19:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:15.180 19:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:15.180 19:47:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:09:17.080 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:17.080 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:17.080 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:17.080 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:17.080 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:17.080 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:09:17.080 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:17.080 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:17.080 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:09:17.080 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:17.081 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:17.081 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:17.081 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:17.081 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:17.081 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:17.081 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:17.081 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:17.081 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:17.081 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:17.081 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:17.081 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:17.081 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:17.081 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:17.081 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:17.081 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:17.081 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:17.081 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:17.081 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:17.081 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:17.081 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:17.081 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:17.081 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:17.081 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=66355 00:09:17.081 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:17.081 19:47:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:17.081 [global] 00:09:17.081 thread=1 00:09:17.081 invalidate=1 00:09:17.081 rw=randrw 00:09:17.081 time_based=1 00:09:17.081 runtime=6 00:09:17.081 ioengine=libaio 00:09:17.081 direct=1 00:09:17.081 bs=4096 00:09:17.081 iodepth=128 00:09:17.081 norandommap=0 00:09:17.081 numjobs=1 00:09:17.081 00:09:17.081 verify_dump=1 00:09:17.081 verify_backlog=512 00:09:17.081 verify_state_save=0 00:09:17.081 do_verify=1 00:09:17.081 verify=crc32c-intel 00:09:17.081 [job0] 00:09:17.081 filename=/dev/nvme0n1 00:09:17.081 Could not set queue depth (nvme0n1) 00:09:17.338 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:17.338 fio-3.35 00:09:17.338 Starting 1 thread 00:09:18.271 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:18.530 19:47:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:18.788 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:18.788 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:18.788 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:18.788 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:18.788 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:18.788 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:18.789 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:18.789 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:18.789 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:18.789 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:18.789 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:18.789 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:18.789 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:19.048 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:19.306 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:19.306 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:19.306 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:19.306 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:19.306 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:19.306 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:19.306 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:19.306 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:19.306 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:19.307 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:19.307 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:19.307 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:19.307 19:47:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 66355 00:09:23.496 00:09:23.496 job0: (groupid=0, jobs=1): err= 0: pid=66382: Wed Jul 24 19:47:52 2024 00:09:23.496 read: IOPS=10.6k, BW=41.4MiB/s (43.5MB/s)(249MiB/6007msec) 00:09:23.496 slat (usec): min=2, max=5716, avg=54.43, stdev=211.50 00:09:23.496 clat (usec): min=988, max=14703, avg=8190.21, stdev=1400.65 00:09:23.496 lat (usec): min=1013, max=14714, avg=8244.64, stdev=1405.12 00:09:23.496 clat percentiles (usec): 00:09:23.496 | 1.00th=[ 4228], 5.00th=[ 6259], 10.00th=[ 7046], 20.00th=[ 7504], 00:09:23.496 | 30.00th=[ 7701], 40.00th=[ 7898], 50.00th=[ 8094], 60.00th=[ 8291], 00:09:23.496 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 9241], 95.00th=[11600], 00:09:23.496 | 99.00th=[12780], 99.50th=[12911], 99.90th=[13566], 99.95th=[13829], 00:09:23.496 | 99.99th=[14484] 00:09:23.496 bw ( KiB/s): min= 5272, max=28872, per=51.99%, avg=22063.55, stdev=6950.17, samples=11 00:09:23.496 iops : min= 1318, max= 7218, avg=5515.82, stdev=1737.50, samples=11 00:09:23.496 write: IOPS=6324, BW=24.7MiB/s (25.9MB/s)(132MiB/5336msec); 0 zone resets 00:09:23.496 slat (usec): min=3, max=1740, avg=64.71, stdev=147.99 00:09:23.496 clat (usec): min=907, max=14459, avg=7109.66, stdev=1224.86 00:09:23.496 lat (usec): min=934, max=14494, avg=7174.38, stdev=1228.92 00:09:23.496 clat percentiles (usec): 00:09:23.496 | 1.00th=[ 3326], 5.00th=[ 4293], 10.00th=[ 5604], 20.00th=[ 6652], 00:09:23.496 | 30.00th=[ 6915], 40.00th=[ 7111], 50.00th=[ 7308], 60.00th=[ 7439], 00:09:23.496 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8094], 95.00th=[ 8356], 00:09:23.496 | 99.00th=[10945], 99.50th=[11600], 99.90th=[12780], 99.95th=[13042], 00:09:23.496 | 99.99th=[13829] 00:09:23.496 bw ( KiB/s): min= 5584, max=28152, per=87.38%, avg=22104.82, stdev=6796.34, samples=11 00:09:23.496 iops : min= 1396, max= 7038, avg=5526.18, stdev=1699.07, samples=11 00:09:23.496 lat (usec) : 1000=0.01% 00:09:23.496 lat (msec) : 2=0.03%, 4=1.57%, 10=93.12%, 20=5.27% 00:09:23.496 cpu : usr=6.08%, sys=23.09%, ctx=5702, majf=0, minf=114 00:09:23.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:23.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:23.496 issued rwts: total=63729,33746,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.496 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:23.496 00:09:23.496 Run status group 0 (all jobs): 00:09:23.496 READ: bw=41.4MiB/s (43.5MB/s), 41.4MiB/s-41.4MiB/s (43.5MB/s-43.5MB/s), io=249MiB (261MB), run=6007-6007msec 00:09:23.496 WRITE: bw=24.7MiB/s (25.9MB/s), 24.7MiB/s-24.7MiB/s (25.9MB/s-25.9MB/s), io=132MiB (138MB), run=5336-5336msec 00:09:23.496 00:09:23.496 Disk stats (read/write): 00:09:23.496 nvme0n1: ios=63057/32849, merge=0/0, ticks=493324/218159, in_queue=711483, util=98.70% 00:09:23.496 19:47:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:23.754 19:47:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:24.012 19:47:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:24.012 19:47:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:24.012 19:47:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:24.012 19:47:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:24.012 19:47:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:24.012 19:47:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:24.012 19:47:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:24.012 19:47:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:24.012 19:47:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:24.012 19:47:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:24.012 19:47:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:24.012 19:47:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:24.013 19:47:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:24.013 19:47:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=66457 00:09:24.013 19:47:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:24.013 19:47:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:24.013 [global] 00:09:24.013 thread=1 00:09:24.013 invalidate=1 00:09:24.013 rw=randrw 00:09:24.013 time_based=1 00:09:24.013 runtime=6 00:09:24.013 ioengine=libaio 00:09:24.013 direct=1 00:09:24.013 bs=4096 00:09:24.013 iodepth=128 00:09:24.013 norandommap=0 00:09:24.013 numjobs=1 00:09:24.013 00:09:24.013 verify_dump=1 00:09:24.013 verify_backlog=512 00:09:24.013 verify_state_save=0 00:09:24.013 do_verify=1 00:09:24.013 verify=crc32c-intel 00:09:24.013 [job0] 00:09:24.013 filename=/dev/nvme0n1 00:09:24.284 Could not set queue depth (nvme0n1) 00:09:24.284 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:24.284 fio-3.35 00:09:24.284 Starting 1 thread 00:09:25.242 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:25.501 19:47:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.3 -s 4420 -n non_optimized 00:09:25.760 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:25.760 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:25.760 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:25.760 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:25.760 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:25.760 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:25.760 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:25.760 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:25.760 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:25.760 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:25.760 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:25.760 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:25.760 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:26.018 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:26.277 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:26.277 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:26.277 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:26.277 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:26.277 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:26.277 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:26.277 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:26.277 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:26.277 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:26.277 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:26.277 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:26.277 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:26.278 19:47:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 66457 00:09:30.464 00:09:30.464 job0: (groupid=0, jobs=1): err= 0: pid=66478: Wed Jul 24 19:47:58 2024 00:09:30.464 read: IOPS=11.6k, BW=45.2MiB/s (47.4MB/s)(271MiB/6003msec) 00:09:30.464 slat (usec): min=2, max=6077, avg=43.79, stdev=189.56 00:09:30.464 clat (usec): min=229, max=14771, avg=7566.28, stdev=1942.35 00:09:30.464 lat (usec): min=258, max=14782, avg=7610.06, stdev=1957.21 00:09:30.464 clat percentiles (usec): 00:09:30.464 | 1.00th=[ 2900], 5.00th=[ 3982], 10.00th=[ 4817], 20.00th=[ 5866], 00:09:30.464 | 30.00th=[ 7046], 40.00th=[ 7635], 50.00th=[ 7963], 60.00th=[ 8160], 00:09:30.464 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 9241], 95.00th=[10814], 00:09:30.464 | 99.00th=[12780], 99.50th=[13042], 99.90th=[13566], 99.95th=[13698], 00:09:30.464 | 99.99th=[14091] 00:09:30.464 bw ( KiB/s): min=11704, max=41856, per=53.43%, avg=24715.64, stdev=8829.82, samples=11 00:09:30.464 iops : min= 2926, max=10464, avg=6178.91, stdev=2207.46, samples=11 00:09:30.464 write: IOPS=6913, BW=27.0MiB/s (28.3MB/s)(143MiB/5296msec); 0 zone resets 00:09:30.464 slat (usec): min=4, max=3501, avg=55.66, stdev=130.00 00:09:30.464 clat (usec): min=241, max=13967, avg=6461.07, stdev=1759.10 00:09:30.464 lat (usec): min=314, max=13993, avg=6516.73, stdev=1772.16 00:09:30.464 clat percentiles (usec): 00:09:30.464 | 1.00th=[ 2671], 5.00th=[ 3359], 10.00th=[ 3752], 20.00th=[ 4490], 00:09:30.464 | 30.00th=[ 5407], 40.00th=[ 6718], 50.00th=[ 7111], 60.00th=[ 7373], 00:09:30.464 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8160], 95.00th=[ 8455], 00:09:30.464 | 99.00th=[10290], 99.50th=[11338], 99.90th=[12387], 99.95th=[12911], 00:09:30.464 | 99.99th=[13698] 00:09:30.464 bw ( KiB/s): min=12288, max=40752, per=89.56%, avg=24766.55, stdev=8544.77, samples=11 00:09:30.464 iops : min= 3072, max=10188, avg=6191.64, stdev=2136.19, samples=11 00:09:30.464 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:09:30.464 lat (msec) : 2=0.18%, 4=7.72%, 10=87.98%, 20=4.08% 00:09:30.464 cpu : usr=6.33%, sys=25.37%, ctx=6238, majf=0, minf=114 00:09:30.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:30.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.464 issued rwts: total=69416,36614,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.464 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:09:30.464 00:09:30.464 Run status group 0 (all jobs): 00:09:30.464 READ: bw=45.2MiB/s (47.4MB/s), 45.2MiB/s-45.2MiB/s (47.4MB/s-47.4MB/s), io=271MiB (284MB), run=6003-6003msec 00:09:30.464 WRITE: bw=27.0MiB/s (28.3MB/s), 27.0MiB/s-27.0MiB/s (28.3MB/s-28.3MB/s), io=143MiB (150MB), run=5296-5296msec 00:09:30.464 00:09:30.464 Disk stats (read/write): 00:09:30.464 nvme0n1: ios=68517/36102, merge=0/0, ticks=489090/213909, in_queue=702999, util=98.65% 00:09:30.464 19:47:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:30.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:30.464 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:30.464 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:09:30.464 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:30.464 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.464 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:30.464 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.464 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:09:30.464 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.723 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:30.723 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:30.723 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:30.723 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:30.723 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:30.723 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:30.723 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:30.723 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:30.723 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:30.723 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:30.723 rmmod nvme_tcp 00:09:30.723 rmmod nvme_fabrics 00:09:30.723 rmmod nvme_keyring 00:09:30.982 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:30.982 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:30.982 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:30.982 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # 
'[' -n 66260 ']' 00:09:30.982 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 66260 00:09:30.982 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 66260 ']' 00:09:30.982 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 66260 00:09:30.982 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:09:30.982 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:30.982 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66260 00:09:30.982 killing process with pid 66260 00:09:30.982 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:30.982 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:30.982 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66260' 00:09:30.982 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 66260 00:09:30.982 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 66260 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:31.241 ************************************ 00:09:31.241 END TEST nvmf_target_multipath 00:09:31.241 ************************************ 00:09:31.241 00:09:31.241 real 0m19.545s 00:09:31.241 user 1m13.710s 00:09:31.241 sys 0m9.809s 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:31.241 
************************************ 00:09:31.241 START TEST nvmf_zcopy 00:09:31.241 ************************************ 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:31.241 * Looking for test storage... 00:09:31.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.241 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:31.242 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:31.500 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:31.500 Cannot find device "nvmf_tgt_br" 00:09:31.500 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 
-- # true 00:09:31.500 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:31.500 Cannot find device "nvmf_tgt_br2" 00:09:31.500 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:09:31.500 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:31.500 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:31.500 Cannot find device "nvmf_tgt_br" 00:09:31.500 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:09:31.500 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:31.500 Cannot find device "nvmf_tgt_br2" 00:09:31.500 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:09:31.500 19:47:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:31.500 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:31.500 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:31.500 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:31.500 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:31.500 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:31.500 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:31.500 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:31.500 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:31.500 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:31.500 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:31.500 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:31.500 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:31.500 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:31.500 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:31.500 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:31.500 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:31.500 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:31.500 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:31.500 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:31.500 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:31.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:09:31.771 00:09:31.771 --- 10.0.0.2 ping statistics --- 00:09:31.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.771 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:31.771 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:31.771 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:09:31.771 00:09:31.771 --- 10.0.0.3 ping statistics --- 00:09:31.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.771 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:31.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:31.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:09:31.771 00:09:31.771 --- 10.0.0.1 ping statistics --- 00:09:31.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.771 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=66731 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 66731 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 66731 ']' 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:31.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:31.771 19:48:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.771 [2024-07-24 19:48:00.339875] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:09:31.771 [2024-07-24 19:48:00.339985] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.033 [2024-07-24 19:48:00.480593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.033 [2024-07-24 19:48:00.598679] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.033 [2024-07-24 19:48:00.598780] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.033 [2024-07-24 19:48:00.598794] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.033 [2024-07-24 19:48:00.598802] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.033 [2024-07-24 19:48:00.598810] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.033 [2024-07-24 19:48:00.598844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.033 [2024-07-24 19:48:00.654992] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.965 [2024-07-24 19:48:01.355853] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:09:32.965 [2024-07-24 19:48:01.372001] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.965 malloc0 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:32.965 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:32.965 { 00:09:32.965 "params": { 00:09:32.965 "name": "Nvme$subsystem", 00:09:32.965 "trtype": "$TEST_TRANSPORT", 00:09:32.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:32.965 "adrfam": "ipv4", 00:09:32.965 "trsvcid": "$NVMF_PORT", 00:09:32.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:32.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:32.965 "hdgst": ${hdgst:-false}, 00:09:32.965 "ddgst": ${ddgst:-false} 00:09:32.965 }, 00:09:32.965 "method": "bdev_nvme_attach_controller" 00:09:32.965 } 00:09:32.965 EOF 00:09:32.965 )") 00:09:32.966 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:32.966 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
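Spelled out against scripts/rpc.py instead of the rpc_cmd wrapper, the target configuration traced above amounts to the sequence below; every flag is the one visible in the xtrace, but this is a sketch rather than the literal zcopy.sh code:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # zcopy.sh@22: TCP transport with zero-copy enabled and in-capsule data disabled (-c 0)
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy

    # zcopy.sh@24-27: subsystem with up to 10 namespaces, plus data and discovery listeners on 10.0.0.2:4420
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # zcopy.sh@29-30: 32 MB malloc bdev with 4096-byte blocks, exposed as namespace 1
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1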
00:09:32.966 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:32.966 19:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:32.966 "params": { 00:09:32.966 "name": "Nvme1", 00:09:32.966 "trtype": "tcp", 00:09:32.966 "traddr": "10.0.0.2", 00:09:32.966 "adrfam": "ipv4", 00:09:32.966 "trsvcid": "4420", 00:09:32.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:32.966 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:32.966 "hdgst": false, 00:09:32.966 "ddgst": false 00:09:32.966 }, 00:09:32.966 "method": "bdev_nvme_attach_controller" 00:09:32.966 }' 00:09:32.966 [2024-07-24 19:48:01.465532] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:09:32.966 [2024-07-24 19:48:01.465657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66764 ] 00:09:32.966 [2024-07-24 19:48:01.607908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.223 [2024-07-24 19:48:01.723931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.223 [2024-07-24 19:48:01.789971] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:33.481 Running I/O for 10 seconds... 00:09:43.452 00:09:43.452 Latency(us) 00:09:43.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.452 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:43.452 Verification LBA range: start 0x0 length 0x1000 00:09:43.452 Nvme1n1 : 10.01 6109.99 47.73 0.00 0.00 20881.69 1630.95 31695.59 00:09:43.452 =================================================================================================================== 00:09:43.452 Total : 6109.99 47.73 0.00 0.00 20881.69 1630.95 31695.59 00:09:43.711 19:48:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=66880 00:09:43.711 19:48:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:43.711 19:48:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:43.711 19:48:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:43.711 19:48:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:43.711 19:48:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:43.711 19:48:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:43.711 19:48:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:43.711 19:48:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:43.711 { 00:09:43.711 "params": { 00:09:43.711 "name": "Nvme$subsystem", 00:09:43.711 "trtype": "$TEST_TRANSPORT", 00:09:43.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:43.711 "adrfam": "ipv4", 00:09:43.711 "trsvcid": "$NVMF_PORT", 00:09:43.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:43.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:43.711 "hdgst": ${hdgst:-false}, 00:09:43.711 "ddgst": ${ddgst:-false} 00:09:43.711 }, 00:09:43.711 "method": "bdev_nvme_attach_controller" 00:09:43.711 } 00:09:43.711 
EOF 00:09:43.711 )") 00:09:43.711 19:48:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:43.711 [2024-07-24 19:48:12.167924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.711 [2024-07-24 19:48:12.167977] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.711 19:48:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:43.711 19:48:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:43.711 19:48:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:43.711 "params": { 00:09:43.711 "name": "Nvme1", 00:09:43.711 "trtype": "tcp", 00:09:43.711 "traddr": "10.0.0.2", 00:09:43.711 "adrfam": "ipv4", 00:09:43.711 "trsvcid": "4420", 00:09:43.711 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:43.711 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:43.711 "hdgst": false, 00:09:43.711 "ddgst": false 00:09:43.711 }, 00:09:43.711 "method": "bdev_nvme_attach_controller" 00:09:43.711 }' 00:09:43.711 [2024-07-24 19:48:12.179877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.711 [2024-07-24 19:48:12.179908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.712 [2024-07-24 19:48:12.191880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.712 [2024-07-24 19:48:12.191909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.712 [2024-07-24 19:48:12.203892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.712 [2024-07-24 19:48:12.203935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.712 [2024-07-24 19:48:12.215893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.712 [2024-07-24 19:48:12.215940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.712 [2024-07-24 19:48:12.217514] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
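The verify pass above ends at roughly 6110 IOPS (47.73 MiB/s) with an average latency of about 20.9 ms at queue depth 128 and 8 KiB I/O, after which zcopy.sh@37 starts a second bdevperf doing a 50/50 random read/write mix for 5 seconds against the same generated target JSON (handed over as /dev/fd/63, i.e. process substitution). A sketch of that invocation, assuming gen_nvmf_target_json from nvmf/common.sh is available in the shell, as it is in the traced test:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

    # zcopy.sh@37-39: 50/50 randrw, 8 KiB I/O, queue depth 128, 5 seconds, run in the background
    $bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
    perfpid=$!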
00:09:43.712 [2024-07-24 19:48:12.217603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66880 ] 00:09:43.712 [2024-07-24 19:48:12.227903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.712 [2024-07-24 19:48:12.227950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.712 [2024-07-24 19:48:12.239918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.712 [2024-07-24 19:48:12.239949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.712 [2024-07-24 19:48:12.251919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.712 [2024-07-24 19:48:12.251969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.712 [2024-07-24 19:48:12.263931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.712 [2024-07-24 19:48:12.263976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.712 [2024-07-24 19:48:12.275936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.712 [2024-07-24 19:48:12.275983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.712 [2024-07-24 19:48:12.287941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.712 [2024-07-24 19:48:12.287987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.712 [2024-07-24 19:48:12.299945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.712 [2024-07-24 19:48:12.299992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.712 [2024-07-24 19:48:12.311953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.712 [2024-07-24 19:48:12.311986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.712 [2024-07-24 19:48:12.323960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.712 [2024-07-24 19:48:12.323994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.712 [2024-07-24 19:48:12.335962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.712 [2024-07-24 19:48:12.335995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.712 [2024-07-24 19:48:12.347968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.712 [2024-07-24 19:48:12.347999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.712 [2024-07-24 19:48:12.355464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.712 [2024-07-24 19:48:12.359977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.712 [2024-07-24 19:48:12.360007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.712 [2024-07-24 19:48:12.372013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.712 [2024-07-24 19:48:12.372074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:43.971 [2024-07-24 19:48:12.384021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.971 [2024-07-24 19:48:12.384057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.971 [2024-07-24 19:48:12.396013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.971 [2024-07-24 19:48:12.396048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.971 [2024-07-24 19:48:12.408011] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.971 [2024-07-24 19:48:12.408046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.971 [2024-07-24 19:48:12.420026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.971 [2024-07-24 19:48:12.420060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.971 [2024-07-24 19:48:12.432027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.971 [2024-07-24 19:48:12.432062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.971 [2024-07-24 19:48:12.444007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.971 [2024-07-24 19:48:12.444053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.971 [2024-07-24 19:48:12.456013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.971 [2024-07-24 19:48:12.456048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.971 [2024-07-24 19:48:12.468015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.971 [2024-07-24 19:48:12.468046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.971 [2024-07-24 19:48:12.474106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.971 [2024-07-24 19:48:12.480016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.971 [2024-07-24 19:48:12.480048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.971 [2024-07-24 19:48:12.492032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.971 [2024-07-24 19:48:12.492067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.971 [2024-07-24 19:48:12.504039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.971 [2024-07-24 19:48:12.504075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.971 [2024-07-24 19:48:12.516041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.971 [2024-07-24 19:48:12.516073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.971 [2024-07-24 19:48:12.528046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.971 [2024-07-24 19:48:12.528100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.971 [2024-07-24 19:48:12.538384] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:43.971 [2024-07-24 19:48:12.540049] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:43.971 [2024-07-24 19:48:12.540095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.971 [2024-07-24 19:48:12.552050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.971 [2024-07-24 19:48:12.552100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.971 [2024-07-24 19:48:12.564054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.971 [2024-07-24 19:48:12.564086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.971 [2024-07-24 19:48:12.576065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.971 [2024-07-24 19:48:12.576105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.971 [2024-07-24 19:48:12.588074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.971 [2024-07-24 19:48:12.588114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.971 [2024-07-24 19:48:12.600077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.971 [2024-07-24 19:48:12.600111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.971 [2024-07-24 19:48:12.612085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.971 [2024-07-24 19:48:12.612118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.971 [2024-07-24 19:48:12.624103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.971 [2024-07-24 19:48:12.624172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.971 [2024-07-24 19:48:12.636116] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.971 [2024-07-24 19:48:12.636158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.229 [2024-07-24 19:48:12.648140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.229 [2024-07-24 19:48:12.648191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.229 Running I/O for 5 seconds... 
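From here to the end of the excerpt the log is one repeating pair: while the 5-second randrw job is in flight, the test keeps re-issuing nvmf_subsystem_add_ns for a namespace that already exists, so each attempt pauses the subsystem, fails in spdk_nvmf_subsystem_add_ns_ext with "Requested NSID 1 already in use", and is reported by nvmf_rpc_ns_paused as "Unable to add namespace". A minimal sketch of such a retry loop, assuming perfpid is the backgrounded bdevperf from the previous step (the literal loop in zcopy.sh may differ):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Exercise the subsystem pause/resume path under zero-copy I/O; every call is expected to fail
    while kill -0 "$perfpid" 2> /dev/null; do
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done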
00:09:44.229 [2024-07-24 19:48:12.660149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.229 [2024-07-24 19:48:12.660196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.229 [2024-07-24 19:48:12.678329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.229 [2024-07-24 19:48:12.678388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.229 [2024-07-24 19:48:12.692735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.229 [2024-07-24 19:48:12.692784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.229 [2024-07-24 19:48:12.707979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.229 [2024-07-24 19:48:12.708051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.229 [2024-07-24 19:48:12.718152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.229 [2024-07-24 19:48:12.718199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.229 [2024-07-24 19:48:12.732906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.229 [2024-07-24 19:48:12.732959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.229 [2024-07-24 19:48:12.747488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.230 [2024-07-24 19:48:12.747527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.230 [2024-07-24 19:48:12.763847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.230 [2024-07-24 19:48:12.763911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.230 [2024-07-24 19:48:12.780166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.230 [2024-07-24 19:48:12.780229] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.230 [2024-07-24 19:48:12.797130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.230 [2024-07-24 19:48:12.797185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.230 [2024-07-24 19:48:12.813852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.230 [2024-07-24 19:48:12.813890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.230 [2024-07-24 19:48:12.830205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.230 [2024-07-24 19:48:12.830258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.230 [2024-07-24 19:48:12.847486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.230 [2024-07-24 19:48:12.847529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.230 [2024-07-24 19:48:12.864234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.230 [2024-07-24 19:48:12.864308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.230 [2024-07-24 19:48:12.880528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.230 
[2024-07-24 19:48:12.880570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.230 [2024-07-24 19:48:12.890247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.230 [2024-07-24 19:48:12.890300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.488 [2024-07-24 19:48:12.904702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.488 [2024-07-24 19:48:12.904766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.488 [2024-07-24 19:48:12.919262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.488 [2024-07-24 19:48:12.919337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.488 [2024-07-24 19:48:12.934651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.488 [2024-07-24 19:48:12.934706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.488 [2024-07-24 19:48:12.943651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.488 [2024-07-24 19:48:12.943692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.488 [2024-07-24 19:48:12.960284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.488 [2024-07-24 19:48:12.960340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.488 [2024-07-24 19:48:12.977713] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.488 [2024-07-24 19:48:12.977765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.488 [2024-07-24 19:48:12.993760] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.488 [2024-07-24 19:48:12.993825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.488 [2024-07-24 19:48:13.012639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.488 [2024-07-24 19:48:13.012682] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.489 [2024-07-24 19:48:13.027142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.489 [2024-07-24 19:48:13.027199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.489 [2024-07-24 19:48:13.042952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.489 [2024-07-24 19:48:13.043014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.489 [2024-07-24 19:48:13.060226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.489 [2024-07-24 19:48:13.060294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.489 [2024-07-24 19:48:13.076337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.489 [2024-07-24 19:48:13.076401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.489 [2024-07-24 19:48:13.093337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.489 [2024-07-24 19:48:13.093390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.489 [2024-07-24 19:48:13.109636] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.489 [2024-07-24 19:48:13.109711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.489 [2024-07-24 19:48:13.126731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.489 [2024-07-24 19:48:13.126808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.489 [2024-07-24 19:48:13.143041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.489 [2024-07-24 19:48:13.143079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.747 [2024-07-24 19:48:13.161101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.747 [2024-07-24 19:48:13.161152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.747 [2024-07-24 19:48:13.175890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.747 [2024-07-24 19:48:13.175942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.747 [2024-07-24 19:48:13.185810] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.747 [2024-07-24 19:48:13.185847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.747 [2024-07-24 19:48:13.201392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.747 [2024-07-24 19:48:13.201431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.747 [2024-07-24 19:48:13.219174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.747 [2024-07-24 19:48:13.219225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.747 [2024-07-24 19:48:13.234673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.747 [2024-07-24 19:48:13.234727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.747 [2024-07-24 19:48:13.251486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.747 [2024-07-24 19:48:13.251558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.747 [2024-07-24 19:48:13.268437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.747 [2024-07-24 19:48:13.268508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.747 [2024-07-24 19:48:13.284398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.747 [2024-07-24 19:48:13.284454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.747 [2024-07-24 19:48:13.294428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.747 [2024-07-24 19:48:13.294490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.747 [2024-07-24 19:48:13.310750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.747 [2024-07-24 19:48:13.310815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.747 [2024-07-24 19:48:13.327489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.747 [2024-07-24 19:48:13.327530] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.747 [2024-07-24 19:48:13.343995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.747 [2024-07-24 19:48:13.344035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.747 [2024-07-24 19:48:13.360429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.747 [2024-07-24 19:48:13.360483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.747 [2024-07-24 19:48:13.377523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.747 [2024-07-24 19:48:13.377564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.747 [2024-07-24 19:48:13.395906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.747 [2024-07-24 19:48:13.395960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.747 [2024-07-24 19:48:13.406322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.747 [2024-07-24 19:48:13.406371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.006 [2024-07-24 19:48:13.421025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.006 [2024-07-24 19:48:13.421069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.006 [2024-07-24 19:48:13.439047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.006 [2024-07-24 19:48:13.439092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.006 [2024-07-24 19:48:13.453328] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.006 [2024-07-24 19:48:13.453366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.006 [2024-07-24 19:48:13.469007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.006 [2024-07-24 19:48:13.469045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.006 [2024-07-24 19:48:13.487433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.006 [2024-07-24 19:48:13.487471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.006 [2024-07-24 19:48:13.502045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.006 [2024-07-24 19:48:13.502094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.006 [2024-07-24 19:48:13.518050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.006 [2024-07-24 19:48:13.518128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.006 [2024-07-24 19:48:13.535541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.006 [2024-07-24 19:48:13.535629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.006 [2024-07-24 19:48:13.552274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.006 [2024-07-24 19:48:13.552328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.006 [2024-07-24 19:48:13.567722] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.006 [2024-07-24 19:48:13.567809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.006 [2024-07-24 19:48:13.584161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.006 [2024-07-24 19:48:13.584215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.006 [2024-07-24 19:48:13.601343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.006 [2024-07-24 19:48:13.601387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.006 [2024-07-24 19:48:13.618121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.006 [2024-07-24 19:48:13.618168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.006 [2024-07-24 19:48:13.633033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.006 [2024-07-24 19:48:13.633085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.006 [2024-07-24 19:48:13.649422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.006 [2024-07-24 19:48:13.649461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.006 [2024-07-24 19:48:13.666407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.006 [2024-07-24 19:48:13.666489] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.265 [2024-07-24 19:48:13.682416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.265 [2024-07-24 19:48:13.682475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.265 [2024-07-24 19:48:13.698875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.265 [2024-07-24 19:48:13.698930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.265 [2024-07-24 19:48:13.716544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.265 [2024-07-24 19:48:13.716597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.265 [2024-07-24 19:48:13.731539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.265 [2024-07-24 19:48:13.731601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.265 [2024-07-24 19:48:13.742020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.265 [2024-07-24 19:48:13.742055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.265 [2024-07-24 19:48:13.756463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.265 [2024-07-24 19:48:13.756516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.265 [2024-07-24 19:48:13.772804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.265 [2024-07-24 19:48:13.772854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.265 [2024-07-24 19:48:13.789396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.265 [2024-07-24 19:48:13.789436] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.265 [2024-07-24 19:48:13.806913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.265 [2024-07-24 19:48:13.806980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.265 [2024-07-24 19:48:13.823073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.265 [2024-07-24 19:48:13.823137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.265 [2024-07-24 19:48:13.840868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.265 [2024-07-24 19:48:13.840917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.265 [2024-07-24 19:48:13.856263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.265 [2024-07-24 19:48:13.856320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.265 [2024-07-24 19:48:13.874463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.265 [2024-07-24 19:48:13.874514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.265 [2024-07-24 19:48:13.890505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.265 [2024-07-24 19:48:13.890551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.265 [2024-07-24 19:48:13.900403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.265 [2024-07-24 19:48:13.900448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.265 [2024-07-24 19:48:13.916174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.265 [2024-07-24 19:48:13.916226] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.524 [2024-07-24 19:48:13.933061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.524 [2024-07-24 19:48:13.933110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.524 [2024-07-24 19:48:13.949602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.524 [2024-07-24 19:48:13.949687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.524 [2024-07-24 19:48:13.966342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.524 [2024-07-24 19:48:13.966417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.524 [2024-07-24 19:48:13.982969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.524 [2024-07-24 19:48:13.983042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.524 [2024-07-24 19:48:13.999294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.524 [2024-07-24 19:48:13.999360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.524 [2024-07-24 19:48:14.016783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.524 [2024-07-24 19:48:14.016867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.524 [2024-07-24 19:48:14.031621] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.524 [2024-07-24 19:48:14.031690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.524 [2024-07-24 19:48:14.048584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.524 [2024-07-24 19:48:14.048642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.524 [2024-07-24 19:48:14.063521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.524 [2024-07-24 19:48:14.063578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.524 [2024-07-24 19:48:14.079271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.524 [2024-07-24 19:48:14.079336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.524 [2024-07-24 19:48:14.095945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.524 [2024-07-24 19:48:14.096004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.524 [2024-07-24 19:48:14.113893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.524 [2024-07-24 19:48:14.113954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.524 [2024-07-24 19:48:14.128488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.524 [2024-07-24 19:48:14.128527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.524 [2024-07-24 19:48:14.144257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.524 [2024-07-24 19:48:14.144311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.524 [2024-07-24 19:48:14.161968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.524 [2024-07-24 19:48:14.162050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.524 [2024-07-24 19:48:14.177578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.524 [2024-07-24 19:48:14.177618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.524 [2024-07-24 19:48:14.187007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.524 [2024-07-24 19:48:14.187077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.783 [2024-07-24 19:48:14.204155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.783 [2024-07-24 19:48:14.204230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.783 [2024-07-24 19:48:14.219848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.783 [2024-07-24 19:48:14.219922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.783 [2024-07-24 19:48:14.238516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.783 [2024-07-24 19:48:14.238596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.784 [2024-07-24 19:48:14.253628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.784 [2024-07-24 19:48:14.253698] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.784 [2024-07-24 19:48:14.263117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.784 [2024-07-24 19:48:14.263189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.784 [2024-07-24 19:48:14.279415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.784 [2024-07-24 19:48:14.279498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.784 [2024-07-24 19:48:14.295470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.784 [2024-07-24 19:48:14.295534] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.784 [2024-07-24 19:48:14.313232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.784 [2024-07-24 19:48:14.313322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.784 [2024-07-24 19:48:14.327662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.784 [2024-07-24 19:48:14.327731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.784 [2024-07-24 19:48:14.345366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.784 [2024-07-24 19:48:14.345406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.784 [2024-07-24 19:48:14.360267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.784 [2024-07-24 19:48:14.360330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.784 [2024-07-24 19:48:14.370253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.784 [2024-07-24 19:48:14.370323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.784 [2024-07-24 19:48:14.385737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.784 [2024-07-24 19:48:14.385833] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.784 [2024-07-24 19:48:14.402439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.784 [2024-07-24 19:48:14.402529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.784 [2024-07-24 19:48:14.419659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.784 [2024-07-24 19:48:14.419719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.784 [2024-07-24 19:48:14.436663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.784 [2024-07-24 19:48:14.436722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.043 [2024-07-24 19:48:14.453300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.043 [2024-07-24 19:48:14.453377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.043 [2024-07-24 19:48:14.469712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.043 [2024-07-24 19:48:14.469798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.043 [2024-07-24 19:48:14.487186] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.043 [2024-07-24 19:48:14.487274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.043 [2024-07-24 19:48:14.502865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.043 [2024-07-24 19:48:14.502922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.043 [2024-07-24 19:48:14.520770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.043 [2024-07-24 19:48:14.520850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.043 [2024-07-24 19:48:14.535945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.043 [2024-07-24 19:48:14.536012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.043 [2024-07-24 19:48:14.554272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.043 [2024-07-24 19:48:14.554336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.043 [2024-07-24 19:48:14.569173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.043 [2024-07-24 19:48:14.569229] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.043 [2024-07-24 19:48:14.587367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.043 [2024-07-24 19:48:14.587497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.043 [2024-07-24 19:48:14.601364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.043 [2024-07-24 19:48:14.601419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.043 [2024-07-24 19:48:14.617614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.043 [2024-07-24 19:48:14.617708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.043 [2024-07-24 19:48:14.633380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.043 [2024-07-24 19:48:14.633444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.043 [2024-07-24 19:48:14.643333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.043 [2024-07-24 19:48:14.643388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.043 [2024-07-24 19:48:14.659609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.043 [2024-07-24 19:48:14.659665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.043 [2024-07-24 19:48:14.675145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.043 [2024-07-24 19:48:14.675218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.043 [2024-07-24 19:48:14.685586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.043 [2024-07-24 19:48:14.685638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.043 [2024-07-24 19:48:14.700093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.043 [2024-07-24 19:48:14.700175] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.302 [2024-07-24 19:48:14.716817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.302 [2024-07-24 19:48:14.716886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.302 [2024-07-24 19:48:14.733453] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.302 [2024-07-24 19:48:14.733511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.302 [2024-07-24 19:48:14.749452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.302 [2024-07-24 19:48:14.749518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.302 [2024-07-24 19:48:14.766326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.302 [2024-07-24 19:48:14.766416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.302 [2024-07-24 19:48:14.783271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.302 [2024-07-24 19:48:14.783327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.303 [2024-07-24 19:48:14.800387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.303 [2024-07-24 19:48:14.800447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.303 [2024-07-24 19:48:14.815611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.303 [2024-07-24 19:48:14.815674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.303 [2024-07-24 19:48:14.825444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.303 [2024-07-24 19:48:14.825495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.303 [2024-07-24 19:48:14.841607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.303 [2024-07-24 19:48:14.841654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.303 [2024-07-24 19:48:14.856387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.303 [2024-07-24 19:48:14.856457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.303 [2024-07-24 19:48:14.871938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.303 [2024-07-24 19:48:14.871991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.303 [2024-07-24 19:48:14.882142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.303 [2024-07-24 19:48:14.882207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.303 [2024-07-24 19:48:14.897215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.303 [2024-07-24 19:48:14.897270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.303 [2024-07-24 19:48:14.912910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.303 [2024-07-24 19:48:14.912960] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.303 [2024-07-24 19:48:14.922231] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.303 [2024-07-24 19:48:14.922283] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.303 [2024-07-24 19:48:14.939080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.303 [2024-07-24 19:48:14.939133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.303 [2024-07-24 19:48:14.954891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.303 [2024-07-24 19:48:14.954931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.303 [2024-07-24 19:48:14.964140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.303 [2024-07-24 19:48:14.964193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.562 [2024-07-24 19:48:14.980620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.562 [2024-07-24 19:48:14.980660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.562 [2024-07-24 19:48:14.997631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.562 [2024-07-24 19:48:14.997670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.562 [2024-07-24 19:48:15.013987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.562 [2024-07-24 19:48:15.014061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.562 [2024-07-24 19:48:15.023523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.562 [2024-07-24 19:48:15.023577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.562 [2024-07-24 19:48:15.038478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.562 [2024-07-24 19:48:15.038531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.562 [2024-07-24 19:48:15.053434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.562 [2024-07-24 19:48:15.053473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.562 [2024-07-24 19:48:15.069127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.562 [2024-07-24 19:48:15.069197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.562 [2024-07-24 19:48:15.087616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.562 [2024-07-24 19:48:15.087706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.562 [2024-07-24 19:48:15.102670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.562 [2024-07-24 19:48:15.102731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.562 [2024-07-24 19:48:15.119939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.562 [2024-07-24 19:48:15.119992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.562 [2024-07-24 19:48:15.136294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.562 [2024-07-24 19:48:15.136348] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.562 [2024-07-24 19:48:15.152797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.562 [2024-07-24 19:48:15.152856] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.562 [2024-07-24 19:48:15.169017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.562 [2024-07-24 19:48:15.169077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.562 [2024-07-24 19:48:15.187893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.562 [2024-07-24 19:48:15.187948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.562 [2024-07-24 19:48:15.202694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.562 [2024-07-24 19:48:15.202781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.562 [2024-07-24 19:48:15.212909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.562 [2024-07-24 19:48:15.212963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.562 [2024-07-24 19:48:15.228468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.562 [2024-07-24 19:48:15.228540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.822 [2024-07-24 19:48:15.244419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.822 [2024-07-24 19:48:15.244474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.822 [2024-07-24 19:48:15.253692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.822 [2024-07-24 19:48:15.253757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.822 [2024-07-24 19:48:15.269502] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.822 [2024-07-24 19:48:15.269541] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.822 [2024-07-24 19:48:15.287214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.822 [2024-07-24 19:48:15.287253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.822 [2024-07-24 19:48:15.303017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.822 [2024-07-24 19:48:15.303078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.822 [2024-07-24 19:48:15.320081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.822 [2024-07-24 19:48:15.320133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.822 [2024-07-24 19:48:15.337069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.822 [2024-07-24 19:48:15.337122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.822 [2024-07-24 19:48:15.353179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.822 [2024-07-24 19:48:15.353240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.822 [2024-07-24 19:48:15.369616] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.822 [2024-07-24 19:48:15.369663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.822 [2024-07-24 19:48:15.386493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.822 [2024-07-24 19:48:15.386547] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.822 [2024-07-24 19:48:15.404231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.822 [2024-07-24 19:48:15.404294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.822 [2024-07-24 19:48:15.419418] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.822 [2024-07-24 19:48:15.419474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.822 [2024-07-24 19:48:15.436348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.822 [2024-07-24 19:48:15.436398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.822 [2024-07-24 19:48:15.451938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.822 [2024-07-24 19:48:15.451973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.822 [2024-07-24 19:48:15.461538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.822 [2024-07-24 19:48:15.461572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.822 [2024-07-24 19:48:15.477927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.822 [2024-07-24 19:48:15.477963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.081 [2024-07-24 19:48:15.495543] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.081 [2024-07-24 19:48:15.495592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.081 [2024-07-24 19:48:15.510714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.081 [2024-07-24 19:48:15.510782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.081 [2024-07-24 19:48:15.520446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.081 [2024-07-24 19:48:15.520500] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.081 [2024-07-24 19:48:15.536543] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.081 [2024-07-24 19:48:15.536596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.082 [2024-07-24 19:48:15.552411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.082 [2024-07-24 19:48:15.552503] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.082 [2024-07-24 19:48:15.567695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.082 [2024-07-24 19:48:15.567800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.082 [2024-07-24 19:48:15.577158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.082 [2024-07-24 19:48:15.577212] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.082 [2024-07-24 19:48:15.593999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.082 [2024-07-24 19:48:15.594040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.082 [2024-07-24 19:48:15.611753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.082 [2024-07-24 19:48:15.611800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.082 [2024-07-24 19:48:15.627247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.082 [2024-07-24 19:48:15.627302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.082 [2024-07-24 19:48:15.645100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.082 [2024-07-24 19:48:15.645148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.082 [2024-07-24 19:48:15.659994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.082 [2024-07-24 19:48:15.660034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.082 [2024-07-24 19:48:15.669511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.082 [2024-07-24 19:48:15.669550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.082 [2024-07-24 19:48:15.685563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.082 [2024-07-24 19:48:15.685601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.082 [2024-07-24 19:48:15.701630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.082 [2024-07-24 19:48:15.701716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.082 [2024-07-24 19:48:15.719440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.082 [2024-07-24 19:48:15.719498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.082 [2024-07-24 19:48:15.734356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.082 [2024-07-24 19:48:15.734412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.340 [2024-07-24 19:48:15.752077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.340 [2024-07-24 19:48:15.752159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.340 [2024-07-24 19:48:15.767266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.340 [2024-07-24 19:48:15.767329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.341 [2024-07-24 19:48:15.777057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.341 [2024-07-24 19:48:15.777117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.341 [2024-07-24 19:48:15.793026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.341 [2024-07-24 19:48:15.793085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.341 [2024-07-24 19:48:15.809089] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.341 [2024-07-24 19:48:15.809162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.341 [2024-07-24 19:48:15.828456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.341 [2024-07-24 19:48:15.828496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.341 [2024-07-24 19:48:15.843460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.341 [2024-07-24 19:48:15.843511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.341 [2024-07-24 19:48:15.853481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.341 [2024-07-24 19:48:15.853518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.341 [2024-07-24 19:48:15.868270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.341 [2024-07-24 19:48:15.868307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.341 [2024-07-24 19:48:15.883857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.341 [2024-07-24 19:48:15.883894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.341 [2024-07-24 19:48:15.899823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.341 [2024-07-24 19:48:15.899861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.341 [2024-07-24 19:48:15.915649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.341 [2024-07-24 19:48:15.915709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.341 [2024-07-24 19:48:15.932605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.341 [2024-07-24 19:48:15.932675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.341 [2024-07-24 19:48:15.951059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.341 [2024-07-24 19:48:15.951124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.341 [2024-07-24 19:48:15.966473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.341 [2024-07-24 19:48:15.966560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.341 [2024-07-24 19:48:15.976084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.341 [2024-07-24 19:48:15.976136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.341 [2024-07-24 19:48:15.991325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.341 [2024-07-24 19:48:15.991382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.341 [2024-07-24 19:48:16.006670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.341 [2024-07-24 19:48:16.006719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.599 [2024-07-24 19:48:16.016099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.599 [2024-07-24 19:48:16.016157] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.599 [2024-07-24 19:48:16.032227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.599 [2024-07-24 19:48:16.032291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.599 [2024-07-24 19:48:16.047993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.599 [2024-07-24 19:48:16.048053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.599 [2024-07-24 19:48:16.057005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.599 [2024-07-24 19:48:16.057058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.600 [2024-07-24 19:48:16.073618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.600 [2024-07-24 19:48:16.073686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.600 [2024-07-24 19:48:16.090778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.600 [2024-07-24 19:48:16.090844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.600 [2024-07-24 19:48:16.107032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.600 [2024-07-24 19:48:16.107074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.600 [2024-07-24 19:48:16.124678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.600 [2024-07-24 19:48:16.124768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.600 [2024-07-24 19:48:16.140074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.600 [2024-07-24 19:48:16.140160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.600 [2024-07-24 19:48:16.155678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.600 [2024-07-24 19:48:16.155779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.600 [2024-07-24 19:48:16.174463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.600 [2024-07-24 19:48:16.174518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.600 [2024-07-24 19:48:16.189103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.600 [2024-07-24 19:48:16.189172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.600 [2024-07-24 19:48:16.204788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.600 [2024-07-24 19:48:16.204835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.600 [2024-07-24 19:48:16.223280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.600 [2024-07-24 19:48:16.223335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.600 [2024-07-24 19:48:16.237651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.600 [2024-07-24 19:48:16.237763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.600 [2024-07-24 19:48:16.253107] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.600 [2024-07-24 19:48:16.253176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.859 [2024-07-24 19:48:16.271946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.859 [2024-07-24 19:48:16.272001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.859 [2024-07-24 19:48:16.286387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.859 [2024-07-24 19:48:16.286440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.859 [2024-07-24 19:48:16.296040] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.859 [2024-07-24 19:48:16.296110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.859 [2024-07-24 19:48:16.311452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.859 [2024-07-24 19:48:16.311505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.859 [2024-07-24 19:48:16.328621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.859 [2024-07-24 19:48:16.328675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.859 [2024-07-24 19:48:16.344882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.859 [2024-07-24 19:48:16.344934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.859 [2024-07-24 19:48:16.363637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.859 [2024-07-24 19:48:16.363691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.859 [2024-07-24 19:48:16.378805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.859 [2024-07-24 19:48:16.378869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.859 [2024-07-24 19:48:16.396277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.859 [2024-07-24 19:48:16.396347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.859 [2024-07-24 19:48:16.411206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.859 [2024-07-24 19:48:16.411287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.859 [2024-07-24 19:48:16.420482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.859 [2024-07-24 19:48:16.420553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.859 [2024-07-24 19:48:16.436676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.859 [2024-07-24 19:48:16.436727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.859 [2024-07-24 19:48:16.446293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.859 [2024-07-24 19:48:16.446346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.859 [2024-07-24 19:48:16.462605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.859 [2024-07-24 19:48:16.462657] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.859 [2024-07-24 19:48:16.472763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.859 [2024-07-24 19:48:16.472807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.859 [2024-07-24 19:48:16.487696] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.859 [2024-07-24 19:48:16.487775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.859 [2024-07-24 19:48:16.503970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.859 [2024-07-24 19:48:16.504027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.859 [2024-07-24 19:48:16.522745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.859 [2024-07-24 19:48:16.522828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.119 [2024-07-24 19:48:16.537022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.119 [2024-07-24 19:48:16.537082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.119 [2024-07-24 19:48:16.553441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.119 [2024-07-24 19:48:16.553484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.119 [2024-07-24 19:48:16.571414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.119 [2024-07-24 19:48:16.571472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.119 [2024-07-24 19:48:16.586682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.119 [2024-07-24 19:48:16.586748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.119 [2024-07-24 19:48:16.603041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.119 [2024-07-24 19:48:16.603097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.119 [2024-07-24 19:48:16.621831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.119 [2024-07-24 19:48:16.621888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.119 [2024-07-24 19:48:16.636335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.119 [2024-07-24 19:48:16.636409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.119 [2024-07-24 19:48:16.646136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.119 [2024-07-24 19:48:16.646191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.119 [2024-07-24 19:48:16.661687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.119 [2024-07-24 19:48:16.661728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.119 [2024-07-24 19:48:16.676409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.119 [2024-07-24 19:48:16.676462] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.119 [2024-07-24 19:48:16.692082] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.119 [2024-07-24 19:48:16.692122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.119 [2024-07-24 19:48:16.710145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.119 [2024-07-24 19:48:16.710216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.119 [2024-07-24 19:48:16.724965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.119 [2024-07-24 19:48:16.725025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.119 [2024-07-24 19:48:16.740740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.119 [2024-07-24 19:48:16.740827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.119 [2024-07-24 19:48:16.757693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.119 [2024-07-24 19:48:16.757764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.119 [2024-07-24 19:48:16.775126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.119 [2024-07-24 19:48:16.775178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.378 [2024-07-24 19:48:16.790328] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.378 [2024-07-24 19:48:16.790382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.378 [2024-07-24 19:48:16.799921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.378 [2024-07-24 19:48:16.799978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.378 [2024-07-24 19:48:16.814922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.378 [2024-07-24 19:48:16.814975] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.378 [2024-07-24 19:48:16.830715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.378 [2024-07-24 19:48:16.830818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.378 [2024-07-24 19:48:16.847888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.378 [2024-07-24 19:48:16.847931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.378 [2024-07-24 19:48:16.862875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.378 [2024-07-24 19:48:16.862925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.378 [2024-07-24 19:48:16.879714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.378 [2024-07-24 19:48:16.879779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.378 [2024-07-24 19:48:16.896615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.378 [2024-07-24 19:48:16.896654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.378 [2024-07-24 19:48:16.913500] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.378 [2024-07-24 19:48:16.913561] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.378 [2024-07-24 19:48:16.928989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.378 [2024-07-24 19:48:16.929041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.378 [2024-07-24 19:48:16.945027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.378 [2024-07-24 19:48:16.945082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.378 [2024-07-24 19:48:16.962666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.378 [2024-07-24 19:48:16.962703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.378 [2024-07-24 19:48:16.978148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.378 [2024-07-24 19:48:16.978186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.378 [2024-07-24 19:48:16.995643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.378 [2024-07-24 19:48:16.995680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.378 [2024-07-24 19:48:17.010687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.378 [2024-07-24 19:48:17.010728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.378 [2024-07-24 19:48:17.020205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.378 [2024-07-24 19:48:17.020242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.378 [2024-07-24 19:48:17.036705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.378 [2024-07-24 19:48:17.036801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.636 [2024-07-24 19:48:17.053420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.636 [2024-07-24 19:48:17.053476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.636 [2024-07-24 19:48:17.071468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.636 [2024-07-24 19:48:17.071531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.636 [2024-07-24 19:48:17.086264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.636 [2024-07-24 19:48:17.086320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.636 [2024-07-24 19:48:17.103544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.636 [2024-07-24 19:48:17.103601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.636 [2024-07-24 19:48:17.120632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.637 [2024-07-24 19:48:17.120700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.637 [2024-07-24 19:48:17.137456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.637 [2024-07-24 19:48:17.137497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.637 [2024-07-24 19:48:17.154231] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.637 [2024-07-24 19:48:17.154285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.637 [2024-07-24 19:48:17.171319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.637 [2024-07-24 19:48:17.171375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.637 [2024-07-24 19:48:17.186968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.637 [2024-07-24 19:48:17.187024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.637 [2024-07-24 19:48:17.196369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.637 [2024-07-24 19:48:17.196423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.637 [2024-07-24 19:48:17.212074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.637 [2024-07-24 19:48:17.212154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.637 [2024-07-24 19:48:17.229452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.637 [2024-07-24 19:48:17.229492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.637 [2024-07-24 19:48:17.244925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.637 [2024-07-24 19:48:17.244996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.637 [2024-07-24 19:48:17.254440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.637 [2024-07-24 19:48:17.254493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.637 [2024-07-24 19:48:17.269821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.637 [2024-07-24 19:48:17.269877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.637 [2024-07-24 19:48:17.286700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.637 [2024-07-24 19:48:17.286783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.637 [2024-07-24 19:48:17.302641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.637 [2024-07-24 19:48:17.302705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.897 [2024-07-24 19:48:17.318893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.897 [2024-07-24 19:48:17.318951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.897 [2024-07-24 19:48:17.336635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.897 [2024-07-24 19:48:17.336690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.897 [2024-07-24 19:48:17.351163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.897 [2024-07-24 19:48:17.351217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.897 [2024-07-24 19:48:17.366646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.897 [2024-07-24 19:48:17.366699] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.897 [2024-07-24 19:48:17.375991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.897 [2024-07-24 19:48:17.376043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.897 [2024-07-24 19:48:17.392227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.897 [2024-07-24 19:48:17.392284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.897 [2024-07-24 19:48:17.406976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.897 [2024-07-24 19:48:17.407028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.897 [2024-07-24 19:48:17.416180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.897 [2024-07-24 19:48:17.416236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.897 [2024-07-24 19:48:17.432721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.897 [2024-07-24 19:48:17.432816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.897 [2024-07-24 19:48:17.449552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.897 [2024-07-24 19:48:17.449597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.897 [2024-07-24 19:48:17.467054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.897 [2024-07-24 19:48:17.467111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.897 [2024-07-24 19:48:17.481817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.897 [2024-07-24 19:48:17.481855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.897 [2024-07-24 19:48:17.497640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.897 [2024-07-24 19:48:17.497673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.897 [2024-07-24 19:48:17.515300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.897 [2024-07-24 19:48:17.515366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.897 [2024-07-24 19:48:17.532073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.897 [2024-07-24 19:48:17.532128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.897 [2024-07-24 19:48:17.547736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.897 [2024-07-24 19:48:17.547817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.161 [2024-07-24 19:48:17.565063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.161 [2024-07-24 19:48:17.565107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.161 [2024-07-24 19:48:17.581039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.161 [2024-07-24 19:48:17.581131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.161 [2024-07-24 19:48:17.597845] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.161 [2024-07-24 19:48:17.597918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.161 [2024-07-24 19:48:17.614939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.161 [2024-07-24 19:48:17.615005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.161 [2024-07-24 19:48:17.631531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.161 [2024-07-24 19:48:17.631594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.161 [2024-07-24 19:48:17.648365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.161 [2024-07-24 19:48:17.648418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.161 [2024-07-24 19:48:17.664546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.161 [2024-07-24 19:48:17.664606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.161 00:09:49.161 Latency(us) 00:09:49.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:49.161 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:49.162 Nvme1n1 : 5.01 11618.44 90.77 0.00 0.00 11002.14 4766.25 20375.74 00:09:49.162 =================================================================================================================== 00:09:49.162 Total : 11618.44 90.77 0.00 0.00 11002.14 4766.25 20375.74 00:09:49.162 [2024-07-24 19:48:17.676578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.162 [2024-07-24 19:48:17.676632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.162 [2024-07-24 19:48:17.688568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.162 [2024-07-24 19:48:17.688614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.162 [2024-07-24 19:48:17.700604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.162 [2024-07-24 19:48:17.700654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.162 [2024-07-24 19:48:17.712609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.162 [2024-07-24 19:48:17.712676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.162 [2024-07-24 19:48:17.724626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.162 [2024-07-24 19:48:17.724687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.162 [2024-07-24 19:48:17.736614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.162 [2024-07-24 19:48:17.736673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.162 [2024-07-24 19:48:17.748617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.162 [2024-07-24 19:48:17.748672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.162 [2024-07-24 19:48:17.760621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.162 [2024-07-24 19:48:17.760685] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.162 [2024-07-24 19:48:17.772644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.162 [2024-07-24 19:48:17.772718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.162 [2024-07-24 19:48:17.784653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.162 [2024-07-24 19:48:17.784693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.162 [2024-07-24 19:48:17.796664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.162 [2024-07-24 19:48:17.796708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.162 [2024-07-24 19:48:17.808654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.162 [2024-07-24 19:48:17.808693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.162 [2024-07-24 19:48:17.820649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.162 [2024-07-24 19:48:17.820682] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.420 [2024-07-24 19:48:17.832677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.420 [2024-07-24 19:48:17.832718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.420 [2024-07-24 19:48:17.844671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.420 [2024-07-24 19:48:17.844730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.420 [2024-07-24 19:48:17.856667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.420 [2024-07-24 19:48:17.856705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.420 [2024-07-24 19:48:17.868678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.420 [2024-07-24 19:48:17.868733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.420 [2024-07-24 19:48:17.880677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.420 [2024-07-24 19:48:17.880760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.420 [2024-07-24 19:48:17.892664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.420 [2024-07-24 19:48:17.892711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.420 [2024-07-24 19:48:17.904669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.420 [2024-07-24 19:48:17.904710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.420 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (66880) - No such process 00:09:49.420 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 66880 00:09:49.420 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.420 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.420 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
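Each "Requested NSID 1 already in use" / "Unable to add namespace" pair above is the target rejecting an nvmf_subsystem_add_ns RPC for an NSID that is already attached to nqn.2016-06.io.spdk:cnode1; the requests are issued repeatedly while the zcopy I/O job runs, and the trace then detaches the namespace via target/zcopy.sh@52 (nvmf_subsystem_remove_ns). For reference, a minimal hand-driven bash sketch of the same RPC sequence, assuming a running SPDK target that already exposes nqn.2016-06.io.spdk:cnode1 with NSID 1 attached and the default rpc.py socket; the bdev name malloc_dup is hypothetical, and this is not the test script itself:

# Sketch only: provoke the duplicate-NSID rejection, then detach NSID 1 as zcopy.sh@52 does.
cd /home/vagrant/spdk_repo/spdk
scripts/rpc.py bdev_malloc_create -b malloc_dup 64 512                          # hypothetical 64 MiB backing bdev
scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc_dup # rejected: "Requested NSID 1 already in use"
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1            # detach NSID 1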
00:09:49.420 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.420 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:49.420 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.420 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:49.420 delay0 00:09:49.420 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.420 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:49.420 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.420 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:49.420 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.420 19:48:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:49.678 [2024-07-24 19:48:18.113761] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:56.240 Initializing NVMe Controllers 00:09:56.240 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:56.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:56.240 Initialization complete. Launching workers. 
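zcopy.sh@56 above points the SPDK abort example at the delay0-backed namespace over NVMe/TCP, and the initiator output that follows reports how many I/Os and aborts completed. For reference, a sketch of the invocation shape with the flags spelled out; it is not part of the captured log, and the traddr/trsvcid values simply mirror the run above and would change for a different listener:

# Sketch of the abort example invocation used by zcopy.sh@56.
# -c 0x1: core mask, -t 5: run time in seconds, -q 64: queue depth,
# -w randrw -M 50: random I/O with a 50% read mix, -l warning: log level,
# -r: NVMe-oF transport ID of the target listener plus the namespace to use.
/home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'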
00:09:56.240 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 70 00:09:56.240 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 357, failed to submit 33 00:09:56.240 success 228, unsuccess 129, failed 0 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:56.240 rmmod nvme_tcp 00:09:56.240 rmmod nvme_fabrics 00:09:56.240 rmmod nvme_keyring 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 66731 ']' 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 66731 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 66731 ']' 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 66731 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66731 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:56.240 killing process with pid 66731 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66731' 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 66731 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 66731 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:56.240 00:09:56.240 real 0m24.785s 00:09:56.240 user 0m40.664s 00:09:56.240 sys 0m6.853s 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.240 ************************************ 00:09:56.240 END TEST nvmf_zcopy 00:09:56.240 ************************************ 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:56.240 ************************************ 00:09:56.240 START TEST nvmf_nmic 00:09:56.240 ************************************ 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:56.240 * Looking for test storage... 00:09:56.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.240 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 
-- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:56.241 Cannot find device "nvmf_tgt_br" 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:56.241 Cannot find device "nvmf_tgt_br2" 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:56.241 Cannot find device "nvmf_tgt_br" 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:56.241 Cannot find device "nvmf_tgt_br2" 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:56.241 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:56.241 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:56.241 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:56.500 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:56.500 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:56.500 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:56.500 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:56.500 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:56.500 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:56.500 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:56.500 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:56.500 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:56.500 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:56.500 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:56.500 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:56.500 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:56.500 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:56.500 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:56.500 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:56.500 19:48:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:56.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:56.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:09:56.500 00:09:56.500 --- 10.0.0.2 ping statistics --- 00:09:56.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.500 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:56.500 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:56.500 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:09:56.500 00:09:56.500 --- 10.0.0.3 ping statistics --- 00:09:56.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.500 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:56.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:56.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:09:56.500 00:09:56.500 --- 10.0.0.1 ping statistics --- 00:09:56.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.500 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=67207 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 67207 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 67207 ']' 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:56.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:56.500 19:48:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:56.500 [2024-07-24 19:48:25.145607] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
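The veth topology that nvmf_veth_init builds in the trace above can be reproduced outside the harness with roughly the commands below. This is a condensed sketch assembled from the ip/iptables calls visible in the xtrace, not the script itself; the namespace, interface and bridge names are the ones the test uses, and a root shell with iproute2 and iptables is assumed.

# create the target namespace and three veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# move the target-side interfaces into the namespace and assign addresses
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring the links up on both sides of each pair
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the host-side peers together
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# allow NVMe/TCP traffic on port 4420 and forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# sanity-check connectivity in both directions
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With this in place the initiator in the root namespace reaches 10.0.0.2 and 10.0.0.3 through nvmf_br, and the target namespace reaches the initiator at 10.0.0.1, which is exactly what the three pings in the log verify before the target application is started.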
00:09:56.500 [2024-07-24 19:48:25.145706] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.790 [2024-07-24 19:48:25.289332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:56.790 [2024-07-24 19:48:25.418581] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.790 [2024-07-24 19:48:25.418665] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.790 [2024-07-24 19:48:25.418680] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.790 [2024-07-24 19:48:25.418690] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.790 [2024-07-24 19:48:25.418699] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:56.790 [2024-07-24 19:48:25.419889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.790 [2024-07-24 19:48:25.420020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.790 [2024-07-24 19:48:25.420116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:56.790 [2024-07-24 19:48:25.420126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.049 [2024-07-24 19:48:25.476976] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:57.616 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:57.616 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:57.616 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:57.616 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:57.616 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.616 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.616 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:57.616 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.616 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.616 [2024-07-24 19:48:26.215082] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:57.616 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.616 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:57.616 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.616 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.616 Malloc0 00:09:57.616 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.616 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:57.616 19:48:26 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.616 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.617 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.617 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:57.617 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.617 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.617 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.617 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:57.617 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.617 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.617 [2024-07-24 19:48:26.281111] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.876 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.876 test case1: single bdev can't be used in multiple subsystems 00:09:57.876 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:57.876 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:57.876 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.876 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.876 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.876 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:57.876 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.876 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.876 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.876 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:57.876 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:57.876 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.876 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.876 [2024-07-24 19:48:26.304964] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:57.876 [2024-07-24 19:48:26.305210] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:57.876 [2024-07-24 19:48:26.305300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.876 request: 00:09:57.876 { 00:09:57.876 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:57.876 "namespace": { 00:09:57.876 "bdev_name": "Malloc0", 00:09:57.876 "no_auto_visible": false 00:09:57.876 }, 00:09:57.876 "method": "nvmf_subsystem_add_ns", 00:09:57.876 "req_id": 1 00:09:57.876 } 00:09:57.876 Got JSON-RPC error response 00:09:57.876 response: 00:09:57.876 { 00:09:57.876 "code": -32602, 00:09:57.876 "message": "Invalid parameters" 00:09:57.876 } 00:09:57.876 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:57.876 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:57.876 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:57.876 Adding namespace failed - expected result. 00:09:57.876 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:57.876 test case2: host connect to nvmf target in multiple paths 00:09:57.876 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:57.876 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:57.876 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.876 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.876 [2024-07-24 19:48:26.317111] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:57.876 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.876 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid=69cdc0e8-4c23-4318-834b-1d87efff05de -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:57.876 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid=69cdc0e8-4c23-4318-834b-1d87efff05de -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:58.135 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:58.135 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:58.135 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:58.135 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:58.135 19:48:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:00.125 19:48:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:00.125 19:48:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:00.125 19:48:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:00.125 19:48:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:00.125 19:48:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:00.125 19:48:28 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:00.125 19:48:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:00.125 [global] 00:10:00.125 thread=1 00:10:00.125 invalidate=1 00:10:00.125 rw=write 00:10:00.125 time_based=1 00:10:00.125 runtime=1 00:10:00.125 ioengine=libaio 00:10:00.125 direct=1 00:10:00.125 bs=4096 00:10:00.125 iodepth=1 00:10:00.125 norandommap=0 00:10:00.125 numjobs=1 00:10:00.125 00:10:00.125 verify_dump=1 00:10:00.125 verify_backlog=512 00:10:00.125 verify_state_save=0 00:10:00.125 do_verify=1 00:10:00.125 verify=crc32c-intel 00:10:00.125 [job0] 00:10:00.125 filename=/dev/nvme0n1 00:10:00.125 Could not set queue depth (nvme0n1) 00:10:00.125 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.125 fio-3.35 00:10:00.125 Starting 1 thread 00:10:01.504 00:10:01.504 job0: (groupid=0, jobs=1): err= 0: pid=67298: Wed Jul 24 19:48:29 2024 00:10:01.504 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:01.504 slat (nsec): min=13442, max=59342, avg=15462.52, stdev=3088.56 00:10:01.504 clat (usec): min=138, max=446, avg=170.48, stdev=15.36 00:10:01.504 lat (usec): min=152, max=462, avg=185.95, stdev=15.92 00:10:01.504 clat percentiles (usec): 00:10:01.504 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:10:01.504 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:10:01.504 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 194], 00:10:01.504 | 99.00th=[ 210], 99.50th=[ 227], 99.90th=[ 249], 99.95th=[ 326], 00:10:01.504 | 99.99th=[ 449] 00:10:01.504 write: IOPS=3250, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1001msec); 0 zone resets 00:10:01.504 slat (nsec): min=15918, max=98983, avg=22559.54, stdev=3903.04 00:10:01.504 clat (usec): min=86, max=220, avg=105.60, stdev=10.41 00:10:01.504 lat (usec): min=106, max=319, avg=128.16, stdev=11.74 00:10:01.504 clat percentiles (usec): 00:10:01.504 | 1.00th=[ 89], 5.00th=[ 92], 10.00th=[ 95], 20.00th=[ 98], 00:10:01.504 | 30.00th=[ 100], 40.00th=[ 102], 50.00th=[ 104], 60.00th=[ 106], 00:10:01.504 | 70.00th=[ 110], 80.00th=[ 114], 90.00th=[ 120], 95.00th=[ 125], 00:10:01.504 | 99.00th=[ 137], 99.50th=[ 141], 99.90th=[ 159], 99.95th=[ 169], 00:10:01.504 | 99.99th=[ 221] 00:10:01.504 bw ( KiB/s): min=12800, max=12800, per=98.44%, avg=12800.00, stdev= 0.00, samples=1 00:10:01.504 iops : min= 3200, max= 3200, avg=3200.00, stdev= 0.00, samples=1 00:10:01.504 lat (usec) : 100=15.95%, 250=84.02%, 500=0.03% 00:10:01.504 cpu : usr=2.30%, sys=9.50%, ctx=6326, majf=0, minf=2 00:10:01.504 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.504 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.504 issued rwts: total=3072,3254,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.504 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.504 00:10:01.504 Run status group 0 (all jobs): 00:10:01.504 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:01.504 WRITE: bw=12.7MiB/s (13.3MB/s), 12.7MiB/s-12.7MiB/s (13.3MB/s-13.3MB/s), io=12.7MiB (13.3MB), run=1001-1001msec 00:10:01.504 00:10:01.504 Disk stats (read/write): 00:10:01.504 nvme0n1: ios=2708/3072, merge=0/0, ticks=472/356, 
in_queue=828, util=91.48% 00:10:01.504 19:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:01.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:01.504 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:01.504 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:01.504 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:01.504 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:01.504 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:01.504 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:01.504 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:01.504 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:01.504 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:01.504 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:01.504 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:01.504 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:01.504 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:01.504 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:01.504 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:01.504 rmmod nvme_tcp 00:10:01.504 rmmod nvme_fabrics 00:10:01.504 rmmod nvme_keyring 00:10:01.504 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:01.504 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:01.504 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:01.504 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 67207 ']' 00:10:01.504 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 67207 00:10:01.504 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 67207 ']' 00:10:01.504 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 67207 00:10:01.504 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:01.504 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:01.504 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67207 00:10:01.764 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:01.764 killing process with pid 67207 00:10:01.764 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:01.764 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67207' 00:10:01.764 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # 
kill 67207 00:10:01.764 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 67207 00:10:01.764 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:01.764 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:01.764 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:01.764 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:01.764 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:01.764 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.764 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.764 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:02.024 00:10:02.024 real 0m5.837s 00:10:02.024 user 0m18.713s 00:10:02.024 sys 0m2.279s 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:02.024 ************************************ 00:10:02.024 END TEST nvmf_nmic 00:10:02.024 ************************************ 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:02.024 ************************************ 00:10:02.024 START TEST nvmf_fio_target 00:10:02.024 ************************************ 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:02.024 * Looking for test storage... 
00:10:02.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:02.024 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:02.025 
19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:02.025 Cannot find device "nvmf_tgt_br" 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:02.025 Cannot find device "nvmf_tgt_br2" 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:02.025 Cannot find device "nvmf_tgt_br" 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:02.025 Cannot find device "nvmf_tgt_br2" 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:10:02.025 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:02.284 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:02.284 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:02.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:02.284 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:02.284 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:02.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:02.284 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:02.284 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:02.284 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:02.284 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:02.284 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:02.284 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:02.284 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:02.284 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:02.284 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:02.284 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:02.284 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:02.284 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:02.284 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:02.284 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:02.284 
19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:02.284 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:02.284 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:02.284 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:02.284 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:02.284 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:02.284 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:02.285 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:02.285 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:02.285 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:02.285 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:02.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:02.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:10:02.285 00:10:02.285 --- 10.0.0.2 ping statistics --- 00:10:02.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.285 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:10:02.285 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:02.285 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:02.285 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:10:02.285 00:10:02.285 --- 10.0.0.3 ping statistics --- 00:10:02.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.285 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:10:02.285 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:02.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:02.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:10:02.285 00:10:02.285 --- 10.0.0.1 ping statistics --- 00:10:02.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.285 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:10:02.285 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.285 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:10:02.285 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:02.285 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.285 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:02.285 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:02.285 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.285 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:02.285 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:02.285 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:02.285 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:02.285 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:02.285 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.285 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=67475 00:10:02.285 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 67475 00:10:02.285 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:02.285 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 67475 ']' 00:10:02.285 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.544 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:02.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.544 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.544 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:02.544 19:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.544 [2024-07-24 19:48:31.006879] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
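As in the nmic run above, nvmfappstart launches the target inside the test namespace and then waits for its JSON-RPC socket before any configuration is sent. Done by hand, the equivalent is roughly the sketch below: the nvmf_tgt command line is the one shown in the trace, while the until-loop with rpc_get_methods is only an illustrative stand-in for the harness's waitforlisten helper and assumes the default /var/tmp/spdk.sock RPC socket.

# start the target inside the test namespace
# (flags as used by the test: shm id 0, tracepoint group mask 0xFFFF, core mask 0xF)
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

# block until the RPC socket answers; after that, configuration RPCs such as
# nvmf_create_transport can be issued from the root namespace
until ./scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done

Because the RPC endpoint is a Unix-domain socket on the shared filesystem, rpc.py can be run from the root namespace even though the target process lives inside nvmf_tgt_ns_spdk, which is why the subsequent rpc.py calls in the log carry no ip netns exec prefix.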
00:10:02.544 [2024-07-24 19:48:31.006989] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.544 [2024-07-24 19:48:31.147069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:02.802 [2024-07-24 19:48:31.263304] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.802 [2024-07-24 19:48:31.263367] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:02.802 [2024-07-24 19:48:31.263379] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.802 [2024-07-24 19:48:31.263388] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.802 [2024-07-24 19:48:31.263396] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.802 [2024-07-24 19:48:31.263540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.802 [2024-07-24 19:48:31.264280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.803 [2024-07-24 19:48:31.264454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:02.803 [2024-07-24 19:48:31.264507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.803 [2024-07-24 19:48:31.317942] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:03.369 19:48:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:03.369 19:48:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:03.369 19:48:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:03.369 19:48:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:03.369 19:48:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.369 19:48:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.369 19:48:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:03.627 [2024-07-24 19:48:32.225893] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:03.627 19:48:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:03.885 19:48:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:03.885 19:48:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:04.143 19:48:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:04.143 19:48:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:04.401 19:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:04.401 19:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:04.659 19:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:04.659 19:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:04.915 19:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:05.172 19:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:05.173 19:48:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:05.430 19:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:05.430 19:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:05.688 19:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:05.688 19:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:05.947 19:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:06.204 19:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:06.204 19:48:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:06.463 19:48:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:06.463 19:48:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:06.720 19:48:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:06.978 [2024-07-24 19:48:35.588766] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:06.978 19:48:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:07.236 19:48:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:07.493 19:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid=69cdc0e8-4c23-4318-834b-1d87efff05de -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:07.751 19:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:07.751 19:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:07.751 19:48:36 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:07.751 19:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:07.751 19:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:07.751 19:48:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:09.658 19:48:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:09.658 19:48:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:09.658 19:48:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:09.658 19:48:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:09.658 19:48:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:09.658 19:48:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:09.658 19:48:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:09.658 [global] 00:10:09.658 thread=1 00:10:09.658 invalidate=1 00:10:09.658 rw=write 00:10:09.658 time_based=1 00:10:09.658 runtime=1 00:10:09.658 ioengine=libaio 00:10:09.658 direct=1 00:10:09.658 bs=4096 00:10:09.658 iodepth=1 00:10:09.658 norandommap=0 00:10:09.658 numjobs=1 00:10:09.658 00:10:09.658 verify_dump=1 00:10:09.658 verify_backlog=512 00:10:09.658 verify_state_save=0 00:10:09.658 do_verify=1 00:10:09.658 verify=crc32c-intel 00:10:09.658 [job0] 00:10:09.658 filename=/dev/nvme0n1 00:10:09.659 [job1] 00:10:09.659 filename=/dev/nvme0n2 00:10:09.659 [job2] 00:10:09.659 filename=/dev/nvme0n3 00:10:09.659 [job3] 00:10:09.659 filename=/dev/nvme0n4 00:10:09.917 Could not set queue depth (nvme0n1) 00:10:09.917 Could not set queue depth (nvme0n2) 00:10:09.917 Could not set queue depth (nvme0n3) 00:10:09.917 Could not set queue depth (nvme0n4) 00:10:09.917 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.917 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.917 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.917 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.917 fio-3.35 00:10:09.917 Starting 4 threads 00:10:11.293 00:10:11.293 job0: (groupid=0, jobs=1): err= 0: pid=67660: Wed Jul 24 19:48:39 2024 00:10:11.293 read: IOPS=2683, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec) 00:10:11.293 slat (nsec): min=12118, max=51340, avg=15213.75, stdev=2754.11 00:10:11.293 clat (usec): min=126, max=2062, avg=179.25, stdev=48.29 00:10:11.293 lat (usec): min=151, max=2078, avg=194.46, stdev=48.37 00:10:11.293 clat percentiles (usec): 00:10:11.293 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 157], 00:10:11.293 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:10:11.293 | 70.00th=[ 180], 80.00th=[ 204], 90.00th=[ 225], 95.00th=[ 239], 00:10:11.293 | 99.00th=[ 273], 99.50th=[ 338], 99.90th=[ 441], 99.95th=[ 449], 00:10:11.293 | 99.99th=[ 2057] 
00:10:11.293 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:11.293 slat (usec): min=14, max=221, avg=22.08, stdev= 5.37 00:10:11.293 clat (usec): min=90, max=410, avg=129.83, stdev=18.68 00:10:11.293 lat (usec): min=111, max=459, avg=151.92, stdev=19.25 00:10:11.293 clat percentiles (usec): 00:10:11.293 | 1.00th=[ 100], 5.00th=[ 106], 10.00th=[ 111], 20.00th=[ 116], 00:10:11.293 | 30.00th=[ 120], 40.00th=[ 123], 50.00th=[ 126], 60.00th=[ 133], 00:10:11.293 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 153], 95.00th=[ 161], 00:10:11.293 | 99.00th=[ 174], 99.50th=[ 182], 99.90th=[ 249], 99.95th=[ 334], 00:10:11.293 | 99.99th=[ 412] 00:10:11.293 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:11.293 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:11.293 lat (usec) : 100=0.61%, 250=98.21%, 500=1.16% 00:10:11.293 lat (msec) : 4=0.02% 00:10:11.293 cpu : usr=3.10%, sys=7.70%, ctx=5760, majf=0, minf=12 00:10:11.293 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:11.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.293 issued rwts: total=2686,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.293 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.293 job1: (groupid=0, jobs=1): err= 0: pid=67661: Wed Jul 24 19:48:39 2024 00:10:11.293 read: IOPS=1814, BW=7257KiB/s (7431kB/s)(7264KiB/1001msec) 00:10:11.293 slat (nsec): min=12323, max=38742, avg=14428.56, stdev=2221.40 00:10:11.293 clat (usec): min=150, max=6834, avg=292.60, stdev=255.66 00:10:11.293 lat (usec): min=163, max=6848, avg=307.03, stdev=255.90 00:10:11.293 clat percentiles (usec): 00:10:11.293 | 1.00th=[ 235], 5.00th=[ 251], 10.00th=[ 255], 20.00th=[ 265], 00:10:11.293 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:10:11.293 | 70.00th=[ 285], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 330], 00:10:11.293 | 99.00th=[ 396], 99.50th=[ 490], 99.90th=[ 5800], 99.95th=[ 6849], 00:10:11.293 | 99.99th=[ 6849] 00:10:11.293 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:11.293 slat (usec): min=17, max=125, avg=21.45, stdev= 7.94 00:10:11.293 clat (usec): min=88, max=2202, avg=191.22, stdev=58.85 00:10:11.293 lat (usec): min=107, max=2233, avg=212.67, stdev=59.92 00:10:11.293 clat percentiles (usec): 00:10:11.293 | 1.00th=[ 98], 5.00th=[ 106], 10.00th=[ 120], 20.00th=[ 184], 00:10:11.293 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 196], 60.00th=[ 200], 00:10:11.293 | 70.00th=[ 204], 80.00th=[ 208], 90.00th=[ 217], 95.00th=[ 225], 00:10:11.293 | 99.00th=[ 249], 99.50th=[ 306], 99.90th=[ 676], 99.95th=[ 791], 00:10:11.293 | 99.99th=[ 2212] 00:10:11.293 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:11.293 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:11.293 lat (usec) : 100=1.01%, 250=53.80%, 500=44.85%, 750=0.13%, 1000=0.03% 00:10:11.293 lat (msec) : 4=0.10%, 10=0.08% 00:10:11.293 cpu : usr=1.70%, sys=5.30%, ctx=3865, majf=0, minf=9 00:10:11.293 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:11.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.294 issued rwts: total=1816,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.294 
latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.294 job2: (groupid=0, jobs=1): err= 0: pid=67662: Wed Jul 24 19:48:39 2024 00:10:11.294 read: IOPS=1813, BW=7253KiB/s (7427kB/s)(7260KiB/1001msec) 00:10:11.294 slat (usec): min=12, max=125, avg=16.71, stdev= 3.82 00:10:11.294 clat (usec): min=152, max=434, avg=268.06, stdev=27.36 00:10:11.294 lat (usec): min=171, max=452, avg=284.77, stdev=27.13 00:10:11.294 clat percentiles (usec): 00:10:11.294 | 1.00th=[ 167], 5.00th=[ 204], 10.00th=[ 249], 20.00th=[ 258], 00:10:11.294 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:10:11.294 | 70.00th=[ 281], 80.00th=[ 285], 90.00th=[ 293], 95.00th=[ 302], 00:10:11.294 | 99.00th=[ 318], 99.50th=[ 347], 99.90th=[ 416], 99.95th=[ 437], 00:10:11.294 | 99.99th=[ 437] 00:10:11.294 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:11.294 slat (usec): min=19, max=224, avg=25.77, stdev= 9.13 00:10:11.294 clat (usec): min=109, max=2565, avg=206.18, stdev=73.23 00:10:11.294 lat (usec): min=134, max=2601, avg=231.94, stdev=75.55 00:10:11.294 clat percentiles (usec): 00:10:11.294 | 1.00th=[ 122], 5.00th=[ 133], 10.00th=[ 176], 20.00th=[ 184], 00:10:11.294 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202], 00:10:11.294 | 70.00th=[ 206], 80.00th=[ 215], 90.00th=[ 262], 95.00th=[ 314], 00:10:11.294 | 99.00th=[ 367], 99.50th=[ 383], 99.90th=[ 478], 99.95th=[ 1336], 00:10:11.294 | 99.99th=[ 2573] 00:10:11.294 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:11.294 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:11.294 lat (usec) : 250=52.24%, 500=47.71% 00:10:11.294 lat (msec) : 2=0.03%, 4=0.03% 00:10:11.294 cpu : usr=1.20%, sys=6.90%, ctx=3873, majf=0, minf=7 00:10:11.294 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:11.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.294 issued rwts: total=1815,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.294 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.294 job3: (groupid=0, jobs=1): err= 0: pid=67663: Wed Jul 24 19:48:39 2024 00:10:11.294 read: IOPS=2850, BW=11.1MiB/s (11.7MB/s)(11.1MiB/1001msec) 00:10:11.294 slat (nsec): min=12302, max=48572, avg=14539.50, stdev=1970.29 00:10:11.294 clat (usec): min=145, max=2048, avg=171.94, stdev=39.23 00:10:11.294 lat (usec): min=160, max=2064, avg=186.48, stdev=39.31 00:10:11.294 clat percentiles (usec): 00:10:11.294 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:10:11.294 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:10:11.294 | 70.00th=[ 178], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 192], 00:10:11.294 | 99.00th=[ 206], 99.50th=[ 215], 99.90th=[ 570], 99.95th=[ 594], 00:10:11.294 | 99.99th=[ 2057] 00:10:11.294 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:11.294 slat (nsec): min=14559, max=90801, avg=20939.25, stdev=3580.50 00:10:11.294 clat (usec): min=100, max=250, avg=127.98, stdev=11.13 00:10:11.294 lat (usec): min=120, max=341, avg=148.92, stdev=11.89 00:10:11.294 clat percentiles (usec): 00:10:11.294 | 1.00th=[ 105], 5.00th=[ 112], 10.00th=[ 115], 20.00th=[ 120], 00:10:11.294 | 30.00th=[ 123], 40.00th=[ 125], 50.00th=[ 128], 60.00th=[ 130], 00:10:11.294 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 147], 00:10:11.294 | 
99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 172], 99.95th=[ 210], 00:10:11.294 | 99.99th=[ 251] 00:10:11.294 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:11.294 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:11.294 lat (usec) : 250=99.88%, 500=0.05%, 750=0.05% 00:10:11.294 lat (msec) : 4=0.02% 00:10:11.294 cpu : usr=2.30%, sys=8.10%, ctx=5926, majf=0, minf=7 00:10:11.294 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:11.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.294 issued rwts: total=2853,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.294 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:11.294 00:10:11.294 Run status group 0 (all jobs): 00:10:11.294 READ: bw=35.8MiB/s (37.5MB/s), 7253KiB/s-11.1MiB/s (7427kB/s-11.7MB/s), io=35.8MiB (37.6MB), run=1001-1001msec 00:10:11.294 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:10:11.294 00:10:11.294 Disk stats (read/write): 00:10:11.294 nvme0n1: ios=2469/2560, merge=0/0, ticks=466/342, in_queue=808, util=87.58% 00:10:11.294 nvme0n2: ios=1551/1760, merge=0/0, ticks=449/342, in_queue=791, util=86.95% 00:10:11.294 nvme0n3: ios=1536/1761, merge=0/0, ticks=417/384, in_queue=801, util=89.18% 00:10:11.294 nvme0n4: ios=2504/2560, merge=0/0, ticks=440/342, in_queue=782, util=89.64% 00:10:11.294 19:48:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:11.294 [global] 00:10:11.294 thread=1 00:10:11.294 invalidate=1 00:10:11.294 rw=randwrite 00:10:11.294 time_based=1 00:10:11.294 runtime=1 00:10:11.294 ioengine=libaio 00:10:11.294 direct=1 00:10:11.294 bs=4096 00:10:11.294 iodepth=1 00:10:11.294 norandommap=0 00:10:11.294 numjobs=1 00:10:11.294 00:10:11.294 verify_dump=1 00:10:11.294 verify_backlog=512 00:10:11.294 verify_state_save=0 00:10:11.294 do_verify=1 00:10:11.294 verify=crc32c-intel 00:10:11.294 [job0] 00:10:11.294 filename=/dev/nvme0n1 00:10:11.294 [job1] 00:10:11.294 filename=/dev/nvme0n2 00:10:11.294 [job2] 00:10:11.294 filename=/dev/nvme0n3 00:10:11.294 [job3] 00:10:11.294 filename=/dev/nvme0n4 00:10:11.294 Could not set queue depth (nvme0n1) 00:10:11.294 Could not set queue depth (nvme0n2) 00:10:11.294 Could not set queue depth (nvme0n3) 00:10:11.294 Could not set queue depth (nvme0n4) 00:10:11.294 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:11.294 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:11.294 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:11.294 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:11.294 fio-3.35 00:10:11.294 Starting 4 threads 00:10:12.677 00:10:12.677 job0: (groupid=0, jobs=1): err= 0: pid=67720: Wed Jul 24 19:48:40 2024 00:10:12.677 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:12.677 slat (nsec): min=15564, max=75822, avg=28622.40, stdev=10460.45 00:10:12.677 clat (usec): min=229, max=808, avg=426.69, stdev=112.78 00:10:12.677 lat (usec): min=259, max=862, avg=455.31, stdev=120.46 00:10:12.677 clat percentiles (usec): 
00:10:12.677 | 1.00th=[ 318], 5.00th=[ 351], 10.00th=[ 355], 20.00th=[ 363], 00:10:12.677 | 30.00th=[ 367], 40.00th=[ 371], 50.00th=[ 379], 60.00th=[ 388], 00:10:12.677 | 70.00th=[ 400], 80.00th=[ 482], 90.00th=[ 668], 95.00th=[ 717], 00:10:12.677 | 99.00th=[ 750], 99.50th=[ 766], 99.90th=[ 791], 99.95th=[ 807], 00:10:12.677 | 99.99th=[ 807] 00:10:12.677 write: IOPS=1515, BW=6062KiB/s (6207kB/s)(6068KiB/1001msec); 0 zone resets 00:10:12.677 slat (usec): min=19, max=162, avg=38.84, stdev=12.41 00:10:12.677 clat (usec): min=48, max=2080, avg=307.33, stdev=140.70 00:10:12.677 lat (usec): min=139, max=2118, avg=346.18, stdev=148.03 00:10:12.677 clat percentiles (usec): 00:10:12.677 | 1.00th=[ 122], 5.00th=[ 133], 10.00th=[ 141], 20.00th=[ 204], 00:10:12.677 | 30.00th=[ 227], 40.00th=[ 260], 50.00th=[ 281], 60.00th=[ 297], 00:10:12.677 | 70.00th=[ 375], 80.00th=[ 445], 90.00th=[ 478], 95.00th=[ 506], 00:10:12.677 | 99.00th=[ 652], 99.50th=[ 725], 99.90th=[ 2057], 99.95th=[ 2073], 00:10:12.677 | 99.99th=[ 2073] 00:10:12.677 bw ( KiB/s): min= 5485, max= 5485, per=17.92%, avg=5485.00, stdev= 0.00, samples=1 00:10:12.677 iops : min= 1371, max= 1371, avg=1371.00, stdev= 0.00, samples=1 00:10:12.677 lat (usec) : 50=0.04%, 250=23.14%, 500=65.64%, 750=10.67%, 1000=0.43% 00:10:12.677 lat (msec) : 4=0.08% 00:10:12.677 cpu : usr=1.80%, sys=6.90%, ctx=2544, majf=0, minf=17 00:10:12.677 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.677 issued rwts: total=1024,1517,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.677 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.677 job1: (groupid=0, jobs=1): err= 0: pid=67721: Wed Jul 24 19:48:40 2024 00:10:12.677 read: IOPS=1242, BW=4971KiB/s (5090kB/s)(4976KiB/1001msec) 00:10:12.677 slat (nsec): min=9977, max=67798, avg=18556.66, stdev=5729.55 00:10:12.677 clat (usec): min=246, max=888, avg=410.25, stdev=71.72 00:10:12.677 lat (usec): min=258, max=899, avg=428.80, stdev=72.66 00:10:12.677 clat percentiles (usec): 00:10:12.677 | 1.00th=[ 281], 5.00th=[ 347], 10.00th=[ 355], 20.00th=[ 367], 00:10:12.677 | 30.00th=[ 371], 40.00th=[ 379], 50.00th=[ 388], 60.00th=[ 396], 00:10:12.677 | 70.00th=[ 412], 80.00th=[ 457], 90.00th=[ 502], 95.00th=[ 578], 00:10:12.677 | 99.00th=[ 627], 99.50th=[ 693], 99.90th=[ 816], 99.95th=[ 889], 00:10:12.677 | 99.99th=[ 889] 00:10:12.677 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:12.677 slat (usec): min=12, max=116, avg=22.31, stdev= 5.72 00:10:12.677 clat (usec): min=167, max=619, avg=277.61, stdev=68.98 00:10:12.677 lat (usec): min=190, max=651, avg=299.91, stdev=70.96 00:10:12.677 clat percentiles (usec): 00:10:12.677 | 1.00th=[ 188], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 221], 00:10:12.677 | 30.00th=[ 239], 40.00th=[ 260], 50.00th=[ 273], 60.00th=[ 281], 00:10:12.677 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 343], 95.00th=[ 445], 00:10:12.677 | 99.00th=[ 498], 99.50th=[ 529], 99.90th=[ 611], 99.95th=[ 619], 00:10:12.677 | 99.99th=[ 619] 00:10:12.677 bw ( KiB/s): min= 7273, max= 7273, per=23.76%, avg=7273.00, stdev= 0.00, samples=1 00:10:12.677 iops : min= 1818, max= 1818, avg=1818.00, stdev= 0.00, samples=1 00:10:12.678 lat (usec) : 250=19.03%, 500=75.97%, 750=4.89%, 1000=0.11% 00:10:12.678 cpu : usr=1.30%, sys=4.90%, ctx=2781, majf=0, minf=9 00:10:12.678 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.678 issued rwts: total=1244,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.678 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.678 job2: (groupid=0, jobs=1): err= 0: pid=67722: Wed Jul 24 19:48:40 2024 00:10:12.678 read: IOPS=3040, BW=11.9MiB/s (12.5MB/s)(11.9MiB/1001msec) 00:10:12.678 slat (nsec): min=12490, max=39027, avg=14384.81, stdev=2174.53 00:10:12.678 clat (usec): min=138, max=1895, avg=167.39, stdev=36.19 00:10:12.678 lat (usec): min=151, max=1908, avg=181.78, stdev=36.33 00:10:12.678 clat percentiles (usec): 00:10:12.678 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:10:12.678 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:10:12.678 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 190], 00:10:12.678 | 99.00th=[ 202], 99.50th=[ 210], 99.90th=[ 379], 99.95th=[ 668], 00:10:12.678 | 99.99th=[ 1893] 00:10:12.678 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:12.678 slat (nsec): min=14763, max=96307, avg=21221.00, stdev=3981.29 00:10:12.678 clat (usec): min=94, max=228, avg=120.52, stdev=12.57 00:10:12.678 lat (usec): min=115, max=325, avg=141.74, stdev=13.63 00:10:12.678 clat percentiles (usec): 00:10:12.678 | 1.00th=[ 98], 5.00th=[ 102], 10.00th=[ 104], 20.00th=[ 110], 00:10:12.678 | 30.00th=[ 114], 40.00th=[ 117], 50.00th=[ 120], 60.00th=[ 124], 00:10:12.678 | 70.00th=[ 128], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 143], 00:10:12.678 | 99.00th=[ 151], 99.50th=[ 155], 99.90th=[ 163], 99.95th=[ 172], 00:10:12.678 | 99.99th=[ 229] 00:10:12.678 bw ( KiB/s): min=12327, max=12327, per=40.27%, avg=12327.00, stdev= 0.00, samples=1 00:10:12.678 iops : min= 3081, max= 3081, avg=3081.00, stdev= 0.00, samples=1 00:10:12.678 lat (usec) : 100=1.36%, 250=98.46%, 500=0.15%, 750=0.02% 00:10:12.678 lat (msec) : 2=0.02% 00:10:12.678 cpu : usr=2.10%, sys=9.00%, ctx=6127, majf=0, minf=16 00:10:12.678 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.678 issued rwts: total=3044,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.678 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.678 job3: (groupid=0, jobs=1): err= 0: pid=67723: Wed Jul 24 19:48:40 2024 00:10:12.678 read: IOPS=1240, BW=4963KiB/s (5082kB/s)(4968KiB/1001msec) 00:10:12.678 slat (nsec): min=9946, max=48211, avg=18820.29, stdev=5185.93 00:10:12.678 clat (usec): min=244, max=876, avg=410.35, stdev=73.77 00:10:12.678 lat (usec): min=260, max=895, avg=429.17, stdev=73.59 00:10:12.678 clat percentiles (usec): 00:10:12.678 | 1.00th=[ 289], 5.00th=[ 347], 10.00th=[ 359], 20.00th=[ 367], 00:10:12.678 | 30.00th=[ 371], 40.00th=[ 379], 50.00th=[ 383], 60.00th=[ 392], 00:10:12.678 | 70.00th=[ 408], 80.00th=[ 457], 90.00th=[ 498], 95.00th=[ 586], 00:10:12.678 | 99.00th=[ 652], 99.50th=[ 685], 99.90th=[ 832], 99.95th=[ 881], 00:10:12.678 | 99.99th=[ 881] 00:10:12.678 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:12.678 slat (nsec): min=15797, max=94258, avg=27512.86, stdev=8677.59 00:10:12.678 clat (usec): min=176, max=583, avg=272.14, stdev=64.35 00:10:12.678 lat 
(usec): min=199, max=648, avg=299.65, stdev=70.03 00:10:12.678 clat percentiles (usec): 00:10:12.678 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 219], 00:10:12.678 | 30.00th=[ 235], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 277], 00:10:12.678 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 334], 95.00th=[ 429], 00:10:12.678 | 99.00th=[ 478], 99.50th=[ 515], 99.90th=[ 562], 99.95th=[ 586], 00:10:12.678 | 99.99th=[ 586] 00:10:12.678 bw ( KiB/s): min= 7272, max= 7272, per=23.75%, avg=7272.00, stdev= 0.00, samples=1 00:10:12.678 iops : min= 1818, max= 1818, avg=1818.00, stdev= 0.00, samples=1 00:10:12.678 lat (usec) : 250=19.94%, 500=75.27%, 750=4.64%, 1000=0.14% 00:10:12.678 cpu : usr=1.50%, sys=5.50%, ctx=2778, majf=0, minf=5 00:10:12.678 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.678 issued rwts: total=1242,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.678 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.678 00:10:12.678 Run status group 0 (all jobs): 00:10:12.678 READ: bw=25.6MiB/s (26.8MB/s), 4092KiB/s-11.9MiB/s (4190kB/s-12.5MB/s), io=25.6MiB (26.8MB), run=1001-1001msec 00:10:12.678 WRITE: bw=29.9MiB/s (31.3MB/s), 6062KiB/s-12.0MiB/s (6207kB/s-12.6MB/s), io=29.9MiB (31.4MB), run=1001-1001msec 00:10:12.678 00:10:12.678 Disk stats (read/write): 00:10:12.678 nvme0n1: ios=1073/1086, merge=0/0, ticks=464/376, in_queue=840, util=88.44% 00:10:12.678 nvme0n2: ios=1068/1440, merge=0/0, ticks=427/363, in_queue=790, util=89.55% 00:10:12.678 nvme0n3: ios=2566/2803, merge=0/0, ticks=444/375, in_queue=819, util=89.61% 00:10:12.678 nvme0n4: ios=1024/1438, merge=0/0, ticks=406/399, in_queue=805, util=89.77% 00:10:12.678 19:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:12.678 [global] 00:10:12.678 thread=1 00:10:12.678 invalidate=1 00:10:12.678 rw=write 00:10:12.678 time_based=1 00:10:12.678 runtime=1 00:10:12.678 ioengine=libaio 00:10:12.678 direct=1 00:10:12.678 bs=4096 00:10:12.678 iodepth=128 00:10:12.678 norandommap=0 00:10:12.678 numjobs=1 00:10:12.678 00:10:12.678 verify_dump=1 00:10:12.678 verify_backlog=512 00:10:12.678 verify_state_save=0 00:10:12.678 do_verify=1 00:10:12.678 verify=crc32c-intel 00:10:12.678 [job0] 00:10:12.678 filename=/dev/nvme0n1 00:10:12.678 [job1] 00:10:12.678 filename=/dev/nvme0n2 00:10:12.678 [job2] 00:10:12.678 filename=/dev/nvme0n3 00:10:12.678 [job3] 00:10:12.678 filename=/dev/nvme0n4 00:10:12.678 Could not set queue depth (nvme0n1) 00:10:12.678 Could not set queue depth (nvme0n2) 00:10:12.678 Could not set queue depth (nvme0n3) 00:10:12.678 Could not set queue depth (nvme0n4) 00:10:12.678 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.678 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.678 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.678 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.678 fio-3.35 00:10:12.678 Starting 4 threads 00:10:14.056 00:10:14.056 job0: (groupid=0, jobs=1): err= 0: pid=67785: Wed Jul 24 19:48:42 2024 00:10:14.056 read: 
IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:10:14.056 slat (usec): min=9, max=3339, avg=90.36, stdev=419.79 00:10:14.056 clat (usec): min=9143, max=13276, avg=12178.02, stdev=533.42 00:10:14.056 lat (usec): min=11439, max=13300, avg=12268.37, stdev=340.13 00:10:14.056 clat percentiles (usec): 00:10:14.056 | 1.00th=[ 9634], 5.00th=[11600], 10.00th=[11863], 20.00th=[11863], 00:10:14.056 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12256], 60.00th=[12256], 00:10:14.056 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12780], 95.00th=[12911], 00:10:14.056 | 99.00th=[13042], 99.50th=[13042], 99.90th=[13173], 99.95th=[13304], 00:10:14.056 | 99.99th=[13304] 00:10:14.056 write: IOPS=5569, BW=21.8MiB/s (22.8MB/s)(21.8MiB/1001msec); 0 zone resets 00:10:14.056 slat (usec): min=12, max=2500, avg=88.16, stdev=360.12 00:10:14.056 clat (usec): min=582, max=12668, avg=11511.24, stdev=982.36 00:10:14.056 lat (usec): min=607, max=12689, avg=11599.40, stdev=912.77 00:10:14.056 clat percentiles (usec): 00:10:14.056 | 1.00th=[ 6390], 5.00th=[10945], 10.00th=[11207], 20.00th=[11338], 00:10:14.056 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:10:14.056 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12125], 95.00th=[12256], 00:10:14.056 | 99.00th=[12518], 99.50th=[12518], 99.90th=[12518], 99.95th=[12649], 00:10:14.056 | 99.99th=[12649] 00:10:14.056 bw ( KiB/s): min=21676, max=21676, per=26.80%, avg=21676.00, stdev= 0.00, samples=1 00:10:14.056 iops : min= 5419, max= 5419, avg=5419.00, stdev= 0.00, samples=1 00:10:14.056 lat (usec) : 750=0.06%, 1000=0.01% 00:10:14.056 lat (msec) : 4=0.30%, 10=2.94%, 20=96.70% 00:10:14.056 cpu : usr=5.50%, sys=15.00%, ctx=335, majf=0, minf=8 00:10:14.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:14.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.056 issued rwts: total=5120,5575,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.056 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.056 job1: (groupid=0, jobs=1): err= 0: pid=67786: Wed Jul 24 19:48:42 2024 00:10:14.056 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:10:14.056 slat (usec): min=5, max=2942, avg=90.27, stdev=412.85 00:10:14.056 clat (usec): min=9031, max=15429, avg=12254.65, stdev=590.44 00:10:14.056 lat (usec): min=11346, max=15443, avg=12344.92, stdev=425.28 00:10:14.056 clat percentiles (usec): 00:10:14.056 | 1.00th=[ 9765], 5.00th=[11600], 10.00th=[11863], 20.00th=[11994], 00:10:14.056 | 30.00th=[12125], 40.00th=[12125], 50.00th=[12256], 60.00th=[12256], 00:10:14.056 | 70.00th=[12387], 80.00th=[12649], 90.00th=[12911], 95.00th=[13042], 00:10:14.056 | 99.00th=[13304], 99.50th=[15139], 99.90th=[15401], 99.95th=[15401], 00:10:14.056 | 99.99th=[15401] 00:10:14.056 write: IOPS=5467, BW=21.4MiB/s (22.4MB/s)(21.4MiB/1001msec); 0 zone resets 00:10:14.056 slat (usec): min=9, max=4979, avg=89.93, stdev=364.35 00:10:14.056 clat (usec): min=248, max=14764, avg=11653.39, stdev=1108.06 00:10:14.056 lat (usec): min=2102, max=15450, avg=11743.32, stdev=1049.62 00:10:14.056 clat percentiles (usec): 00:10:14.056 | 1.00th=[ 5407], 5.00th=[10421], 10.00th=[11338], 20.00th=[11469], 00:10:14.056 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 00:10:14.056 | 70.00th=[11994], 80.00th=[12125], 90.00th=[12256], 95.00th=[12518], 00:10:14.056 | 99.00th=[14615], 99.50th=[14615], 99.90th=[14746], 
99.95th=[14746], 00:10:14.056 | 99.99th=[14746] 00:10:14.056 bw ( KiB/s): min=21512, max=21512, per=26.59%, avg=21512.00, stdev= 0.00, samples=1 00:10:14.056 iops : min= 5378, max= 5378, avg=5378.00, stdev= 0.00, samples=1 00:10:14.056 lat (usec) : 250=0.01% 00:10:14.056 lat (msec) : 4=0.30%, 10=2.69%, 20=97.00% 00:10:14.056 cpu : usr=5.60%, sys=15.60%, ctx=335, majf=0, minf=5 00:10:14.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:14.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.056 issued rwts: total=5120,5473,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.056 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.056 job2: (groupid=0, jobs=1): err= 0: pid=67787: Wed Jul 24 19:48:42 2024 00:10:14.056 read: IOPS=4216, BW=16.5MiB/s (17.3MB/s)(16.5MiB/1002msec) 00:10:14.056 slat (usec): min=5, max=3797, avg=111.00, stdev=523.72 00:10:14.056 clat (usec): min=450, max=16132, avg=14589.37, stdev=1284.89 00:10:14.056 lat (usec): min=3784, max=16147, avg=14700.37, stdev=1175.45 00:10:14.056 clat percentiles (usec): 00:10:14.056 | 1.00th=[ 8586], 5.00th=[12387], 10.00th=[13960], 20.00th=[14353], 00:10:14.056 | 30.00th=[14615], 40.00th=[14746], 50.00th=[14877], 60.00th=[15008], 00:10:14.056 | 70.00th=[15008], 80.00th=[15139], 90.00th=[15270], 95.00th=[15401], 00:10:14.056 | 99.00th=[15795], 99.50th=[15795], 99.90th=[16057], 99.95th=[16057], 00:10:14.056 | 99.99th=[16188] 00:10:14.056 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:10:14.056 slat (usec): min=11, max=3570, avg=106.64, stdev=450.79 00:10:14.056 clat (usec): min=10637, max=15525, avg=14052.83, stdev=657.96 00:10:14.056 lat (usec): min=11883, max=15552, avg=14159.46, stdev=475.53 00:10:14.056 clat percentiles (usec): 00:10:14.056 | 1.00th=[11338], 5.00th=[13173], 10.00th=[13435], 20.00th=[13698], 00:10:14.056 | 30.00th=[13829], 40.00th=[13960], 50.00th=[14091], 60.00th=[14222], 00:10:14.056 | 70.00th=[14353], 80.00th=[14484], 90.00th=[14746], 95.00th=[15008], 00:10:14.056 | 99.00th=[15270], 99.50th=[15401], 99.90th=[15533], 99.95th=[15533], 00:10:14.056 | 99.99th=[15533] 00:10:14.056 bw ( KiB/s): min=18424, max=18440, per=22.79%, avg=18432.00, stdev=11.31, samples=2 00:10:14.056 iops : min= 4606, max= 4610, avg=4608.00, stdev= 2.83, samples=2 00:10:14.056 lat (usec) : 500=0.01% 00:10:14.056 lat (msec) : 4=0.11%, 10=0.61%, 20=99.26% 00:10:14.056 cpu : usr=4.70%, sys=13.79%, ctx=278, majf=0, minf=13 00:10:14.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:14.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.056 issued rwts: total=4225,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.056 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.056 job3: (groupid=0, jobs=1): err= 0: pid=67788: Wed Jul 24 19:48:42 2024 00:10:14.056 read: IOPS=4567, BW=17.8MiB/s (18.7MB/s)(17.9MiB/1002msec) 00:10:14.056 slat (usec): min=7, max=3347, avg=106.83, stdev=504.29 00:10:14.056 clat (usec): min=239, max=15364, avg=14032.87, stdev=1274.98 00:10:14.056 lat (usec): min=3482, max=15380, avg=14139.69, stdev=1173.28 00:10:14.056 clat percentiles (usec): 00:10:14.056 | 1.00th=[ 7308], 5.00th=[11994], 10.00th=[13304], 20.00th=[13829], 00:10:14.056 | 30.00th=[13960], 40.00th=[14091], 
50.00th=[14222], 60.00th=[14353], 00:10:14.056 | 70.00th=[14484], 80.00th=[14615], 90.00th=[14877], 95.00th=[15008], 00:10:14.056 | 99.00th=[15270], 99.50th=[15270], 99.90th=[15401], 99.95th=[15401], 00:10:14.056 | 99.99th=[15401] 00:10:14.056 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:10:14.056 slat (usec): min=9, max=3109, avg=102.66, stdev=441.82 00:10:14.056 clat (usec): min=10146, max=14673, avg=13505.08, stdev=623.33 00:10:14.056 lat (usec): min=11019, max=14776, avg=13607.74, stdev=437.49 00:10:14.056 clat percentiles (usec): 00:10:14.056 | 1.00th=[10814], 5.00th=[12649], 10.00th=[12911], 20.00th=[13173], 00:10:14.056 | 30.00th=[13435], 40.00th=[13435], 50.00th=[13566], 60.00th=[13698], 00:10:14.056 | 70.00th=[13829], 80.00th=[13960], 90.00th=[14091], 95.00th=[14353], 00:10:14.056 | 99.00th=[14484], 99.50th=[14615], 99.90th=[14615], 99.95th=[14615], 00:10:14.056 | 99.99th=[14615] 00:10:14.056 bw ( KiB/s): min=17288, max=19576, per=22.79%, avg=18432.00, stdev=1617.86, samples=2 00:10:14.056 iops : min= 4322, max= 4894, avg=4608.00, stdev=404.47, samples=2 00:10:14.056 lat (usec) : 250=0.01% 00:10:14.056 lat (msec) : 4=0.28%, 10=0.41%, 20=99.29% 00:10:14.056 cpu : usr=4.60%, sys=13.19%, ctx=315, majf=0, minf=9 00:10:14.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:14.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.056 issued rwts: total=4577,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.056 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.057 00:10:14.057 Run status group 0 (all jobs): 00:10:14.057 READ: bw=74.2MiB/s (77.8MB/s), 16.5MiB/s-20.0MiB/s (17.3MB/s-20.9MB/s), io=74.4MiB (78.0MB), run=1001-1002msec 00:10:14.057 WRITE: bw=79.0MiB/s (82.8MB/s), 18.0MiB/s-21.8MiB/s (18.8MB/s-22.8MB/s), io=79.2MiB (83.0MB), run=1001-1002msec 00:10:14.057 00:10:14.057 Disk stats (read/write): 00:10:14.057 nvme0n1: ios=4658/4608, merge=0/0, ticks=12515/11219, in_queue=23734, util=88.47% 00:10:14.057 nvme0n2: ios=4528/4608, merge=0/0, ticks=12321/11386, in_queue=23707, util=88.87% 00:10:14.057 nvme0n3: ios=3584/4032, merge=0/0, ticks=11874/12111, in_queue=23985, util=88.98% 00:10:14.057 nvme0n4: ios=3808/4096, merge=0/0, ticks=12295/11976, in_queue=24271, util=89.72% 00:10:14.057 19:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:14.057 [global] 00:10:14.057 thread=1 00:10:14.057 invalidate=1 00:10:14.057 rw=randwrite 00:10:14.057 time_based=1 00:10:14.057 runtime=1 00:10:14.057 ioengine=libaio 00:10:14.057 direct=1 00:10:14.057 bs=4096 00:10:14.057 iodepth=128 00:10:14.057 norandommap=0 00:10:14.057 numjobs=1 00:10:14.057 00:10:14.057 verify_dump=1 00:10:14.057 verify_backlog=512 00:10:14.057 verify_state_save=0 00:10:14.057 do_verify=1 00:10:14.057 verify=crc32c-intel 00:10:14.057 [job0] 00:10:14.057 filename=/dev/nvme0n1 00:10:14.057 [job1] 00:10:14.057 filename=/dev/nvme0n2 00:10:14.057 [job2] 00:10:14.057 filename=/dev/nvme0n3 00:10:14.057 [job3] 00:10:14.057 filename=/dev/nvme0n4 00:10:14.057 Could not set queue depth (nvme0n1) 00:10:14.057 Could not set queue depth (nvme0n2) 00:10:14.057 Could not set queue depth (nvme0n3) 00:10:14.057 Could not set queue depth (nvme0n4) 00:10:14.057 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.057 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.057 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.057 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.057 fio-3.35 00:10:14.057 Starting 4 threads 00:10:15.488 00:10:15.488 job0: (groupid=0, jobs=1): err= 0: pid=67841: Wed Jul 24 19:48:43 2024 00:10:15.488 read: IOPS=2736, BW=10.7MiB/s (11.2MB/s)(10.8MiB/1006msec) 00:10:15.488 slat (usec): min=7, max=11500, avg=160.06, stdev=1010.85 00:10:15.488 clat (usec): min=2567, max=40711, avg=22453.93, stdev=3245.72 00:10:15.488 lat (usec): min=10533, max=45878, avg=22614.00, stdev=3221.90 00:10:15.488 clat percentiles (usec): 00:10:15.488 | 1.00th=[11076], 5.00th=[15008], 10.00th=[21365], 20.00th=[21890], 00:10:15.488 | 30.00th=[22152], 40.00th=[22414], 50.00th=[22676], 60.00th=[22938], 00:10:15.488 | 70.00th=[23200], 80.00th=[23462], 90.00th=[24511], 95.00th=[25297], 00:10:15.488 | 99.00th=[34866], 99.50th=[39584], 99.90th=[40633], 99.95th=[40633], 00:10:15.488 | 99.99th=[40633] 00:10:15.488 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:10:15.488 slat (usec): min=5, max=23726, avg=173.93, stdev=1171.98 00:10:15.488 clat (usec): min=10596, max=37134, avg=21387.65, stdev=3234.39 00:10:15.488 lat (usec): min=13457, max=37184, avg=21561.58, stdev=3096.66 00:10:15.488 clat percentiles (usec): 00:10:15.488 | 1.00th=[12911], 5.00th=[19006], 10.00th=[19530], 20.00th=[20055], 00:10:15.488 | 30.00th=[20317], 40.00th=[20579], 50.00th=[21103], 60.00th=[21365], 00:10:15.488 | 70.00th=[21627], 80.00th=[22152], 90.00th=[22938], 95.00th=[27657], 00:10:15.488 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:10:15.488 | 99.99th=[36963] 00:10:15.488 bw ( KiB/s): min=12263, max=12288, per=18.55%, avg=12275.50, stdev=17.68, samples=2 00:10:15.488 iops : min= 3065, max= 3072, avg=3068.50, stdev= 4.95, samples=2 00:10:15.488 lat (msec) : 4=0.02%, 20=14.97%, 50=85.01% 00:10:15.488 cpu : usr=2.59%, sys=9.75%, ctx=146, majf=0, minf=15 00:10:15.488 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:10:15.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.488 issued rwts: total=2753,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.488 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.488 job1: (groupid=0, jobs=1): err= 0: pid=67842: Wed Jul 24 19:48:43 2024 00:10:15.488 read: IOPS=2795, BW=10.9MiB/s (11.4MB/s)(11.0MiB/1006msec) 00:10:15.488 slat (usec): min=9, max=15342, avg=170.57, stdev=1225.08 00:10:15.488 clat (usec): min=1901, max=37145, avg=22792.73, stdev=2991.19 00:10:15.488 lat (usec): min=11672, max=43455, avg=22963.29, stdev=3121.30 00:10:15.488 clat percentiles (usec): 00:10:15.488 | 1.00th=[12387], 5.00th=[17957], 10.00th=[20841], 20.00th=[21890], 00:10:15.488 | 30.00th=[22414], 40.00th=[22414], 50.00th=[22676], 60.00th=[22938], 00:10:15.488 | 70.00th=[23200], 80.00th=[23725], 90.00th=[27132], 95.00th=[27919], 00:10:15.488 | 99.00th=[31065], 99.50th=[34866], 99.90th=[36439], 99.95th=[36439], 00:10:15.488 | 99.99th=[36963] 00:10:15.488 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:10:15.488 slat (usec): 
min=6, max=16195, avg=161.25, stdev=1070.47 00:10:15.488 clat (usec): min=9312, max=28638, avg=20628.71, stdev=3104.86 00:10:15.488 lat (usec): min=9358, max=28681, avg=20789.97, stdev=2960.46 00:10:15.488 clat percentiles (usec): 00:10:15.488 | 1.00th=[ 9634], 5.00th=[14484], 10.00th=[16909], 20.00th=[19530], 00:10:15.488 | 30.00th=[20317], 40.00th=[20579], 50.00th=[21103], 60.00th=[21365], 00:10:15.488 | 70.00th=[21627], 80.00th=[21890], 90.00th=[22676], 95.00th=[25297], 00:10:15.488 | 99.00th=[28443], 99.50th=[28443], 99.90th=[28705], 99.95th=[28705], 00:10:15.488 | 99.99th=[28705] 00:10:15.488 bw ( KiB/s): min=12288, max=12288, per=18.57%, avg=12288.00, stdev= 0.00, samples=2 00:10:15.488 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:10:15.488 lat (msec) : 2=0.02%, 10=1.00%, 20=15.55%, 50=83.43% 00:10:15.488 cpu : usr=3.48%, sys=9.25%, ctx=133, majf=0, minf=13 00:10:15.488 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:10:15.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.488 issued rwts: total=2812,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.489 job2: (groupid=0, jobs=1): err= 0: pid=67843: Wed Jul 24 19:48:43 2024 00:10:15.489 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:10:15.489 slat (usec): min=10, max=4732, avg=94.39, stdev=414.73 00:10:15.489 clat (usec): min=8768, max=17272, avg=12591.17, stdev=895.49 00:10:15.489 lat (usec): min=9688, max=17306, avg=12685.56, stdev=910.83 00:10:15.489 clat percentiles (usec): 00:10:15.489 | 1.00th=[ 9896], 5.00th=[11076], 10.00th=[11469], 20.00th=[12125], 00:10:15.489 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:10:15.489 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13304], 95.00th=[13698], 00:10:15.489 | 99.00th=[15533], 99.50th=[15926], 99.90th=[16581], 99.95th=[17171], 00:10:15.489 | 99.99th=[17171] 00:10:15.489 write: IOPS=5283, BW=20.6MiB/s (21.6MB/s)(20.7MiB/1003msec); 0 zone resets 00:10:15.489 slat (usec): min=12, max=4943, avg=89.15, stdev=502.14 00:10:15.489 clat (usec): min=539, max=17513, avg=11770.97, stdev=1262.64 00:10:15.489 lat (usec): min=5109, max=17563, avg=11860.12, stdev=1342.53 00:10:15.489 clat percentiles (usec): 00:10:15.489 | 1.00th=[ 6390], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[11338], 00:10:15.489 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:10:15.489 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12780], 95.00th=[12911], 00:10:15.489 | 99.00th=[15664], 99.50th=[16188], 99.90th=[16909], 99.95th=[17171], 00:10:15.489 | 99.99th=[17433] 00:10:15.489 bw ( KiB/s): min=20480, max=20888, per=31.26%, avg=20684.00, stdev=288.50, samples=2 00:10:15.489 iops : min= 5120, max= 5222, avg=5171.00, stdev=72.12, samples=2 00:10:15.489 lat (usec) : 750=0.01% 00:10:15.489 lat (msec) : 10=3.51%, 20=96.48% 00:10:15.489 cpu : usr=5.89%, sys=14.27%, ctx=321, majf=0, minf=13 00:10:15.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:15.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.489 issued rwts: total=5120,5299,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.489 job3: (groupid=0, jobs=1): 
err= 0: pid=67844: Wed Jul 24 19:48:43 2024 00:10:15.489 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:10:15.489 slat (usec): min=6, max=6110, avg=92.04, stdev=555.56 00:10:15.489 clat (usec): min=7279, max=20407, avg=12881.20, stdev=1352.22 00:10:15.489 lat (usec): min=7295, max=24416, avg=12973.23, stdev=1356.40 00:10:15.489 clat percentiles (usec): 00:10:15.489 | 1.00th=[ 8291], 5.00th=[11600], 10.00th=[11994], 20.00th=[12387], 00:10:15.489 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:10:15.489 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13698], 95.00th=[13960], 00:10:15.489 | 99.00th=[19530], 99.50th=[19792], 99.90th=[20317], 99.95th=[20317], 00:10:15.489 | 99.99th=[20317] 00:10:15.489 write: IOPS=5176, BW=20.2MiB/s (21.2MB/s)(20.3MiB/1004msec); 0 zone resets 00:10:15.489 slat (usec): min=9, max=8593, avg=93.91, stdev=547.44 00:10:15.489 clat (usec): min=914, max=16244, avg=11766.78, stdev=1433.98 00:10:15.489 lat (usec): min=5759, max=16293, avg=11860.68, stdev=1350.36 00:10:15.489 clat percentiles (usec): 00:10:15.489 | 1.00th=[ 6521], 5.00th=[ 8979], 10.00th=[10552], 20.00th=[11076], 00:10:15.489 | 30.00th=[11338], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:10:15.489 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12780], 95.00th=[13042], 00:10:15.489 | 99.00th=[16057], 99.50th=[16188], 99.90th=[16188], 99.95th=[16188], 00:10:15.489 | 99.99th=[16188] 00:10:15.489 bw ( KiB/s): min=20480, max=20480, per=30.95%, avg=20480.00, stdev= 0.00, samples=2 00:10:15.489 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:10:15.489 lat (usec) : 1000=0.01% 00:10:15.489 lat (msec) : 10=5.06%, 20=94.78%, 50=0.16% 00:10:15.489 cpu : usr=5.08%, sys=14.06%, ctx=222, majf=0, minf=11 00:10:15.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:15.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.489 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.489 issued rwts: total=5120,5197,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.489 00:10:15.489 Run status group 0 (all jobs): 00:10:15.489 READ: bw=61.4MiB/s (64.4MB/s), 10.7MiB/s-19.9MiB/s (11.2MB/s-20.9MB/s), io=61.7MiB (64.7MB), run=1003-1006msec 00:10:15.489 WRITE: bw=64.6MiB/s (67.8MB/s), 11.9MiB/s-20.6MiB/s (12.5MB/s-21.6MB/s), io=65.0MiB (68.2MB), run=1003-1006msec 00:10:15.489 00:10:15.489 Disk stats (read/write): 00:10:15.489 nvme0n1: ios=2364/2560, merge=0/0, ticks=49906/51364, in_queue=101270, util=87.46% 00:10:15.489 nvme0n2: ios=2413/2560, merge=0/0, ticks=52576/49210, in_queue=101786, util=87.82% 00:10:15.489 nvme0n3: ios=4225/4608, merge=0/0, ticks=25287/22430, in_queue=47717, util=89.07% 00:10:15.489 nvme0n4: ios=4096/4606, merge=0/0, ticks=49899/50179, in_queue=100078, util=89.63% 00:10:15.489 19:48:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:15.489 19:48:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=67857 00:10:15.489 19:48:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:15.489 19:48:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:15.489 [global] 00:10:15.489 thread=1 00:10:15.489 invalidate=1 00:10:15.489 rw=read 00:10:15.489 time_based=1 00:10:15.489 runtime=10 00:10:15.489 ioengine=libaio 
00:10:15.489 direct=1 00:10:15.489 bs=4096 00:10:15.489 iodepth=1 00:10:15.489 norandommap=1 00:10:15.489 numjobs=1 00:10:15.489 00:10:15.489 [job0] 00:10:15.489 filename=/dev/nvme0n1 00:10:15.489 [job1] 00:10:15.489 filename=/dev/nvme0n2 00:10:15.489 [job2] 00:10:15.489 filename=/dev/nvme0n3 00:10:15.489 [job3] 00:10:15.489 filename=/dev/nvme0n4 00:10:15.489 Could not set queue depth (nvme0n1) 00:10:15.489 Could not set queue depth (nvme0n2) 00:10:15.489 Could not set queue depth (nvme0n3) 00:10:15.489 Could not set queue depth (nvme0n4) 00:10:15.489 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.489 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.489 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.489 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.489 fio-3.35 00:10:15.489 Starting 4 threads 00:10:18.775 19:48:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:18.775 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=38678528, buflen=4096 00:10:18.775 fio: pid=67900, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:18.775 19:48:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:18.775 fio: pid=67899, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:18.775 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=43331584, buflen=4096 00:10:18.775 19:48:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:18.775 19:48:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:19.033 fio: pid=67897, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:19.033 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=2682880, buflen=4096 00:10:19.033 19:48:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:19.033 19:48:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:19.292 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=7016448, buflen=4096 00:10:19.292 fio: pid=67898, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:19.292 00:10:19.292 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=67897: Wed Jul 24 19:48:47 2024 00:10:19.292 read: IOPS=5060, BW=19.8MiB/s (20.7MB/s)(66.6MiB/3367msec) 00:10:19.292 slat (usec): min=12, max=11828, avg=19.50, stdev=152.65 00:10:19.292 clat (usec): min=125, max=3171, avg=176.34, stdev=47.66 00:10:19.292 lat (usec): min=143, max=12121, avg=195.84, stdev=161.26 00:10:19.292 clat percentiles (usec): 00:10:19.292 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:10:19.292 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 167], 00:10:19.292 | 70.00th=[ 174], 80.00th=[ 210], 90.00th=[ 231], 95.00th=[ 241], 00:10:19.292 | 99.00th=[ 260], 
99.50th=[ 265], 99.90th=[ 441], 99.95th=[ 758], 00:10:19.292 | 99.99th=[ 2409] 00:10:19.292 bw ( KiB/s): min=15584, max=22448, per=34.14%, avg=20813.33, stdev=2738.63, samples=6 00:10:19.292 iops : min= 3896, max= 5612, avg=5203.33, stdev=684.66, samples=6 00:10:19.292 lat (usec) : 250=97.62%, 500=2.28%, 750=0.04%, 1000=0.02% 00:10:19.292 lat (msec) : 2=0.02%, 4=0.01% 00:10:19.292 cpu : usr=1.43%, sys=8.14%, ctx=17049, majf=0, minf=1 00:10:19.292 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.292 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.292 issued rwts: total=17040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.292 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.292 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=67898: Wed Jul 24 19:48:47 2024 00:10:19.292 read: IOPS=5000, BW=19.5MiB/s (20.5MB/s)(70.7MiB/3619msec) 00:10:19.292 slat (usec): min=11, max=17653, avg=20.53, stdev=204.31 00:10:19.292 clat (usec): min=105, max=7605, avg=177.76, stdev=107.41 00:10:19.292 lat (usec): min=143, max=23332, avg=198.28, stdev=253.55 00:10:19.292 clat percentiles (usec): 00:10:19.292 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:10:19.292 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 169], 00:10:19.292 | 70.00th=[ 176], 80.00th=[ 208], 90.00th=[ 229], 95.00th=[ 239], 00:10:19.292 | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[ 832], 99.95th=[ 2147], 00:10:19.292 | 99.99th=[ 6587] 00:10:19.292 bw ( KiB/s): min=15072, max=22888, per=32.96%, avg=20094.71, stdev=2973.66, samples=7 00:10:19.292 iops : min= 3768, max= 5722, avg=5023.57, stdev=743.51, samples=7 00:10:19.292 lat (usec) : 250=98.16%, 500=1.68%, 750=0.04%, 1000=0.02% 00:10:19.292 lat (msec) : 2=0.04%, 4=0.04%, 10=0.02% 00:10:19.292 cpu : usr=1.55%, sys=7.68%, ctx=18128, majf=0, minf=1 00:10:19.292 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.292 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.292 issued rwts: total=18098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.292 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.292 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=67899: Wed Jul 24 19:48:47 2024 00:10:19.292 read: IOPS=3370, BW=13.2MiB/s (13.8MB/s)(41.3MiB/3139msec) 00:10:19.292 slat (usec): min=10, max=15598, avg=17.93, stdev=169.29 00:10:19.292 clat (usec): min=135, max=2599, avg=277.04, stdev=70.63 00:10:19.292 lat (usec): min=148, max=15807, avg=294.98, stdev=182.98 00:10:19.292 clat percentiles (usec): 00:10:19.292 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 182], 20.00th=[ 260], 00:10:19.292 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:10:19.292 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 318], 95.00th=[ 355], 00:10:19.292 | 99.00th=[ 457], 99.50th=[ 578], 99.90th=[ 881], 99.95th=[ 1106], 00:10:19.292 | 99.99th=[ 2245] 00:10:19.292 bw ( KiB/s): min=11544, max=14032, per=21.49%, avg=13104.00, stdev=840.59, samples=6 00:10:19.292 iops : min= 2886, max= 3508, avg=3276.00, stdev=210.15, samples=6 00:10:19.292 lat (usec) : 250=13.91%, 500=85.38%, 750=0.57%, 1000=0.07% 00:10:19.292 lat (msec) : 2=0.05%, 4=0.02% 00:10:19.292 cpu : 
usr=1.24%, sys=4.72%, ctx=10585, majf=0, minf=1 00:10:19.292 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.292 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.292 issued rwts: total=10580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.292 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.292 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=67900: Wed Jul 24 19:48:47 2024 00:10:19.292 read: IOPS=3259, BW=12.7MiB/s (13.4MB/s)(36.9MiB/2897msec) 00:10:19.292 slat (usec): min=12, max=271, avg=17.23, stdev= 6.10 00:10:19.292 clat (usec): min=153, max=1835, avg=287.51, stdev=48.55 00:10:19.292 lat (usec): min=170, max=1849, avg=304.74, stdev=50.10 00:10:19.292 clat percentiles (usec): 00:10:19.292 | 1.00th=[ 235], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 265], 00:10:19.292 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 285], 00:10:19.292 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 322], 95.00th=[ 351], 00:10:19.292 | 99.00th=[ 457], 99.50th=[ 562], 99.90th=[ 725], 99.95th=[ 766], 00:10:19.292 | 99.99th=[ 1844] 00:10:19.292 bw ( KiB/s): min=11488, max=13432, per=21.25%, avg=12956.80, stdev=824.75, samples=5 00:10:19.292 iops : min= 2872, max= 3358, avg=3239.20, stdev=206.19, samples=5 00:10:19.292 lat (usec) : 250=5.44%, 500=93.81%, 750=0.67%, 1000=0.05% 00:10:19.292 lat (msec) : 2=0.02% 00:10:19.292 cpu : usr=1.28%, sys=4.66%, ctx=9446, majf=0, minf=1 00:10:19.292 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.292 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.292 issued rwts: total=9444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.292 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.292 00:10:19.292 Run status group 0 (all jobs): 00:10:19.292 READ: bw=59.5MiB/s (62.4MB/s), 12.7MiB/s-19.8MiB/s (13.4MB/s-20.7MB/s), io=215MiB (226MB), run=2897-3619msec 00:10:19.292 00:10:19.292 Disk stats (read/write): 00:10:19.292 nvme0n1: ios=16070/0, merge=0/0, ticks=2747/0, in_queue=2747, util=95.56% 00:10:19.292 nvme0n2: ios=18098/0, merge=0/0, ticks=3218/0, in_queue=3218, util=94.76% 00:10:19.292 nvme0n3: ios=10465/0, merge=0/0, ticks=2947/0, in_queue=2947, util=96.15% 00:10:19.292 nvme0n4: ios=9351/0, merge=0/0, ticks=2761/0, in_queue=2761, util=96.73% 00:10:19.292 19:48:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:19.292 19:48:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:19.551 19:48:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:19.551 19:48:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:19.832 19:48:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:19.832 19:48:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 
00:10:20.095 19:48:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:20.095 19:48:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:20.353 19:48:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:20.353 19:48:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:20.612 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:20.612 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 67857 00:10:20.612 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:20.612 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:20.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.612 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:20.612 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:20.612 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.612 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:20.612 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:20.612 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.612 nvmf hotplug test: fio failed as expected 00:10:20.612 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:20.612 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:20.612 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:20.612 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:20.871 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:20.871 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:20.871 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:20.871 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:20.871 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:20.871 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:20.871 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:20.871 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:20.871 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:20.871 19:48:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:20.871 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:20.871 rmmod nvme_tcp 00:10:20.871 rmmod nvme_fabrics 00:10:20.871 rmmod nvme_keyring 00:10:20.871 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:20.871 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:20.871 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:20.871 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 67475 ']' 00:10:20.871 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 67475 00:10:20.871 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 67475 ']' 00:10:20.871 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 67475 00:10:20.871 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:20.871 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:20.871 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67475 00:10:20.871 killing process with pid 67475 00:10:20.871 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:20.871 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:20.871 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67475' 00:10:20.871 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 67475 00:10:20.871 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 67475 00:10:21.130 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:21.130 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:21.130 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:21.130 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:21.130 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:21.130 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.130 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.130 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.389 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:21.390 ************************************ 00:10:21.390 END TEST nvmf_fio_target 00:10:21.390 ************************************ 00:10:21.390 00:10:21.390 real 0m19.285s 00:10:21.390 user 1m12.982s 00:10:21.390 sys 0m10.368s 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:21.390 19:48:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:21.390 ************************************ 00:10:21.390 START TEST nvmf_bdevio 00:10:21.390 ************************************ 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:21.390 * Looking for test storage... 00:10:21.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.390 19:48:49 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:21.390 
19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:21.390 Cannot find device "nvmf_tgt_br" 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:10:21.390 19:48:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:21.390 Cannot find device "nvmf_tgt_br2" 00:10:21.390 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:10:21.390 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:21.390 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:21.390 Cannot find device "nvmf_tgt_br" 00:10:21.390 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:10:21.390 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:21.390 Cannot find device "nvmf_tgt_br2" 00:10:21.390 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:10:21.390 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:21.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:21.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:21.650 19:48:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:21.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:21.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:10:21.650 00:10:21.650 --- 10.0.0.2 ping statistics --- 00:10:21.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.650 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:21.650 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:21.650 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:10:21.650 00:10:21.650 --- 10.0.0.3 ping statistics --- 00:10:21.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.650 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:21.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:21.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:10:21.650 00:10:21.650 --- 10.0.0.1 ping statistics --- 00:10:21.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.650 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=68169 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 68169 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 68169 ']' 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:21.650 19:48:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.909 [2024-07-24 19:48:50.355269] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:10:21.909 [2024-07-24 19:48:50.355390] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.909 [2024-07-24 19:48:50.498638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.168 [2024-07-24 19:48:50.627632] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.168 [2024-07-24 19:48:50.627957] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.168 [2024-07-24 19:48:50.628125] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:22.168 [2024-07-24 19:48:50.628260] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:22.168 [2024-07-24 19:48:50.628450] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:22.168 [2024-07-24 19:48:50.628716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:22.168 [2024-07-24 19:48:50.628845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:22.168 [2024-07-24 19:48:50.628952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:22.168 [2024-07-24 19:48:50.628955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.168 [2024-07-24 19:48:50.685814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:22.734 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:22.734 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:22.734 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:22.734 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:22.734 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.734 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.734 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:22.734 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.734 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.734 [2024-07-24 19:48:51.400254] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.991 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.991 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:22.991 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.991 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.991 Malloc0 00:10:22.991 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.991 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:10:22.991 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.991 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.991 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.991 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:22.991 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.991 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.991 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.991 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:22.991 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.991 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.991 [2024-07-24 19:48:51.463621] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:22.991 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.991 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:22.992 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:22.992 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:22.992 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:22.992 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:22.992 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:22.992 { 00:10:22.992 "params": { 00:10:22.992 "name": "Nvme$subsystem", 00:10:22.992 "trtype": "$TEST_TRANSPORT", 00:10:22.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:22.992 "adrfam": "ipv4", 00:10:22.992 "trsvcid": "$NVMF_PORT", 00:10:22.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:22.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:22.992 "hdgst": ${hdgst:-false}, 00:10:22.992 "ddgst": ${ddgst:-false} 00:10:22.992 }, 00:10:22.992 "method": "bdev_nvme_attach_controller" 00:10:22.992 } 00:10:22.992 EOF 00:10:22.992 )") 00:10:22.992 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:22.992 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
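
[Editor's note] The gen_nvmf_target_json helper traced above expands the heredoc template into one bdev_nvme_attach_controller entry per subsystem argument and runs the result through jq; the rendered JSON for Nvme1 at 10.0.0.2:4420 is printed just below and read by bdevio from /dev/fd/62, which is consistent with bash process substitution. A hedged sketch of the equivalent manual invocation, using only paths shown in the trace, would be:

  # hand the generated target config to the bdevio test app
  # (/dev/fd/62 in the trace is the process-substitution file descriptor)
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)
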
00:10:22.992 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:22.992 19:48:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:22.992 "params": { 00:10:22.992 "name": "Nvme1", 00:10:22.992 "trtype": "tcp", 00:10:22.992 "traddr": "10.0.0.2", 00:10:22.992 "adrfam": "ipv4", 00:10:22.992 "trsvcid": "4420", 00:10:22.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:22.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:22.992 "hdgst": false, 00:10:22.992 "ddgst": false 00:10:22.992 }, 00:10:22.992 "method": "bdev_nvme_attach_controller" 00:10:22.992 }' 00:10:22.992 [2024-07-24 19:48:51.523511] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:10:22.992 [2024-07-24 19:48:51.523613] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68205 ] 00:10:23.250 [2024-07-24 19:48:51.663781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:23.250 [2024-07-24 19:48:51.775843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.250 [2024-07-24 19:48:51.775970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:23.250 [2024-07-24 19:48:51.775975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.250 [2024-07-24 19:48:51.839704] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:23.508 I/O targets: 00:10:23.508 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:23.508 00:10:23.508 00:10:23.508 CUnit - A unit testing framework for C - Version 2.1-3 00:10:23.508 http://cunit.sourceforge.net/ 00:10:23.508 00:10:23.508 00:10:23.508 Suite: bdevio tests on: Nvme1n1 00:10:23.508 Test: blockdev write read block ...passed 00:10:23.508 Test: blockdev write zeroes read block ...passed 00:10:23.508 Test: blockdev write zeroes read no split ...passed 00:10:23.509 Test: blockdev write zeroes read split ...passed 00:10:23.509 Test: blockdev write zeroes read split partial ...passed 00:10:23.509 Test: blockdev reset ...[2024-07-24 19:48:51.984963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:23.509 [2024-07-24 19:48:51.985272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6367c0 (9): Bad file descriptor 00:10:23.509 [2024-07-24 19:48:52.000110] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:23.509 passed 00:10:23.509 Test: blockdev write read 8 blocks ...passed 00:10:23.509 Test: blockdev write read size > 128k ...passed 00:10:23.509 Test: blockdev write read invalid size ...passed 00:10:23.509 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:23.509 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:23.509 Test: blockdev write read max offset ...passed 00:10:23.509 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:23.509 Test: blockdev writev readv 8 blocks ...passed 00:10:23.509 Test: blockdev writev readv 30 x 1block ...passed 00:10:23.509 Test: blockdev writev readv block ...passed 00:10:23.509 Test: blockdev writev readv size > 128k ...passed 00:10:23.509 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:23.509 Test: blockdev comparev and writev ...[2024-07-24 19:48:52.011029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.509 [2024-07-24 19:48:52.011254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:23.509 [2024-07-24 19:48:52.011283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.509 [2024-07-24 19:48:52.011393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:23.509 [2024-07-24 19:48:52.011701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.509 [2024-07-24 19:48:52.011719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:23.509 [2024-07-24 19:48:52.011959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.509 [2024-07-24 19:48:52.012387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:23.509 [2024-07-24 19:48:52.012986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.509 [2024-07-24 19:48:52.013016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:23.509 [2024-07-24 19:48:52.013036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.509 [2024-07-24 19:48:52.013047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:23.509 [2024-07-24 19:48:52.013336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.509 [2024-07-24 19:48:52.013358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:23.509 [2024-07-24 19:48:52.013375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:23.509 [2024-07-24 19:48:52.013386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:23.509 passed 00:10:23.509 Test: blockdev nvme passthru rw ...passed 00:10:23.509 Test: blockdev nvme passthru vendor specific ...[2024-07-24 19:48:52.014847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:23.509 [2024-07-24 19:48:52.014883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:23.509 [2024-07-24 19:48:52.015093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:23.509 [2024-07-24 19:48:52.015214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:23.509 [2024-07-24 19:48:52.015398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:23.509 [2024-07-24 19:48:52.015421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:23.509 passed 00:10:23.509 Test: blockdev nvme admin passthru ...[2024-07-24 19:48:52.015866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:23.509 [2024-07-24 19:48:52.015899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:23.509 passed 00:10:23.509 Test: blockdev copy ...passed 00:10:23.509 00:10:23.509 Run Summary: Type Total Ran Passed Failed Inactive 00:10:23.509 suites 1 1 n/a 0 0 00:10:23.509 tests 23 23 23 0 0 00:10:23.509 asserts 152 152 152 0 n/a 00:10:23.509 00:10:23.509 Elapsed time = 0.165 seconds 00:10:23.767 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:23.767 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.767 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:23.767 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.767 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:23.767 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:23.767 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:23.767 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:23.767 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:23.768 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:23.768 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:23.768 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:23.768 rmmod nvme_tcp 00:10:23.768 rmmod nvme_fabrics 00:10:23.768 rmmod nvme_keyring 00:10:23.768 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:23.768 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:23.768 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
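
[Editor's note] The CUnit summary above shows all 23 bdevio tests passing against Nvme1n1 in about 0.165 seconds. The teardown that follows deletes the test subsystem over RPC and unloads the initiator-side NVMe modules (the bare rmmod lines are the kernel's confirmation), after which the target process with pid 68169 is killed just below. A minimal sketch of that teardown, using only commands visible in the trace, would be:

  # remove the test subsystem, then unload the initiator NVMe/TCP modules
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
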
00:10:23.768 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 68169 ']' 00:10:23.768 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 68169 00:10:23.768 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 68169 ']' 00:10:23.768 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 68169 00:10:23.768 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:23.768 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:23.768 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68169 00:10:23.768 killing process with pid 68169 00:10:23.768 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:23.768 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:23.768 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68169' 00:10:23.768 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 68169 00:10:23.768 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 68169 00:10:24.026 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:24.026 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:24.026 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:24.026 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:24.026 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:24.026 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.026 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.026 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.026 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:24.026 00:10:24.026 real 0m2.837s 00:10:24.026 user 0m9.478s 00:10:24.026 sys 0m0.773s 00:10:24.026 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:24.026 19:48:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:24.286 ************************************ 00:10:24.286 END TEST nvmf_bdevio 00:10:24.286 ************************************ 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:24.286 ************************************ 00:10:24.286 END TEST nvmf_target_core 00:10:24.286 ************************************ 00:10:24.286 00:10:24.286 real 2m33.991s 00:10:24.286 user 6m53.789s 00:10:24.286 sys 0m52.189s 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:24.286 19:48:52 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:24.286 19:48:52 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:24.286 19:48:52 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:24.286 19:48:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:24.286 ************************************ 00:10:24.286 START TEST nvmf_target_extra 00:10:24.286 ************************************ 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:24.286 * Looking for test storage... 00:10:24.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:24.286 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:24.287 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:24.287 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:24.287 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:10:24.287 19:48:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:24.287 19:48:52 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:24.287 19:48:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:24.287 19:48:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:24.287 ************************************ 00:10:24.287 START TEST nvmf_auth_target 00:10:24.287 ************************************ 00:10:24.287 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:24.545 * Looking for test storage... 00:10:24.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:24.545 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:24.545 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.546 19:48:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.546 19:48:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:24.546 19:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:24.546 Cannot find device "nvmf_tgt_br" 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:24.546 Cannot find device "nvmf_tgt_br2" 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:24.546 Cannot find device "nvmf_tgt_br" 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:24.546 Cannot find device "nvmf_tgt_br2" 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:24.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:24.546 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:24.546 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:24.547 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:24.547 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:24.547 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:24.547 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:24.547 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:24.547 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:24.547 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:24.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:24.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:10:24.805 00:10:24.805 --- 10.0.0.2 ping statistics --- 00:10:24.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.805 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:24.805 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:24.805 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:10:24.805 00:10:24.805 --- 10.0.0.3 ping statistics --- 00:10:24.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.805 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:24.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:24.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:24.805 00:10:24.805 --- 10.0.0.1 ping statistics --- 00:10:24.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.805 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=68429 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 68429 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 68429 ']' 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
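The ip/iptables sequence above is what nvmf_veth_init builds before the target starts: a network namespace nvmf_tgt_ns_spdk holding the target ends of two veth pairs, the peer ends enslaved to the bridge nvmf_br alongside the initiator's peer, and an iptables rule admitting TCP port 4420. A minimal standalone sketch of the same data path, using only the interface and address names that appear in the log (the second pair, nvmf_tgt_if2 with 10.0.0.3, is created identically), would be:

    # sketch: the topology built by nvmf_veth_init, reduced to one target interface
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end + its bridge-side peer
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target end + its bridge-side peer
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end lives inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first listener address
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                       # both bridge-side peers join nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP to the listener port

The nvmf_tgt application is then launched under ip netns exec nvmf_tgt_ns_spdk, so its TCP listeners on 10.0.0.2 and 10.0.0.3 are reachable from the host-side initiator across the bridge, which the three pings above confirm.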
00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:24.805 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=68461 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e7a18beece7801a41c67a4effe1b86bd0ea4f1368a354bdb 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.REn 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e7a18beece7801a41c67a4effe1b86bd0ea4f1368a354bdb 0 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e7a18beece7801a41c67a4effe1b86bd0ea4f1368a354bdb 0 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e7a18beece7801a41c67a4effe1b86bd0ea4f1368a354bdb 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:26.181 19:48:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.REn 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.REn 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.REn 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=abd551f1c730ea4fc907a3b120fd1a27e3a06e009ff1a70a68044bb8d793bfe8 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.6Dh 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key abd551f1c730ea4fc907a3b120fd1a27e3a06e009ff1a70a68044bb8d793bfe8 3 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 abd551f1c730ea4fc907a3b120fd1a27e3a06e009ff1a70a68044bb8d793bfe8 3 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=abd551f1c730ea4fc907a3b120fd1a27e3a06e009ff1a70a68044bb8d793bfe8 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.6Dh 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.6Dh 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.6Dh 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:26.181 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:26.182 19:48:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bf8909d66f32fa0d0fd179a4c870b2ff 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.L9B 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bf8909d66f32fa0d0fd179a4c870b2ff 1 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bf8909d66f32fa0d0fd179a4c870b2ff 1 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bf8909d66f32fa0d0fd179a4c870b2ff 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.L9B 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.L9B 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.L9B 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=633e8dd41e7befb17dbdf40750384101b94a5a46624328bf 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.PS2 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 633e8dd41e7befb17dbdf40750384101b94a5a46624328bf 2 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 633e8dd41e7befb17dbdf40750384101b94a5a46624328bf 2 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=633e8dd41e7befb17dbdf40750384101b94a5a46624328bf 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.PS2 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.PS2 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.PS2 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ae9f77a7c684bd8ae9b69593800c60e45ea21996380b2237 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.29m 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ae9f77a7c684bd8ae9b69593800c60e45ea21996380b2237 2 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ae9f77a7c684bd8ae9b69593800c60e45ea21996380b2237 2 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ae9f77a7c684bd8ae9b69593800c60e45ea21996380b2237 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.29m 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.29m 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.29m 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:26.182 19:48:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=957be40ce63f3416ebbf49e58919b2df 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.67W 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 957be40ce63f3416ebbf49e58919b2df 1 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 957be40ce63f3416ebbf49e58919b2df 1 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=957be40ce63f3416ebbf49e58919b2df 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.67W 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.67W 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.67W 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:26.182 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:26.441 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7326288f01d7bf32e67dddfaa7a2bfce7458bba563a1d94cdf910db056226188 00:10:26.441 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:26.441 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Zds 00:10:26.441 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 
7326288f01d7bf32e67dddfaa7a2bfce7458bba563a1d94cdf910db056226188 3 00:10:26.441 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7326288f01d7bf32e67dddfaa7a2bfce7458bba563a1d94cdf910db056226188 3 00:10:26.441 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:26.441 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:26.441 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7326288f01d7bf32e67dddfaa7a2bfce7458bba563a1d94cdf910db056226188 00:10:26.441 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:26.441 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:26.441 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Zds 00:10:26.441 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Zds 00:10:26.441 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Zds 00:10:26.441 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:10:26.441 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 68429 00:10:26.441 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 68429 ']' 00:10:26.441 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.441 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:26.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.441 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.441 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:26.441 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.700 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:26.700 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:26.700 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 68461 /var/tmp/host.sock 00:10:26.700 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 68461 ']' 00:10:26.700 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:10:26.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:26.700 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:26.700 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
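Each gen_dhchap_key call above reads random bytes with xxd -p -c0 -l <len/2> /dev/urandom, wraps the resulting hex string into a DHHC-1 secret via the small inline python step, and stores it under /tmp/spdk.key-<digest>.<suffix> with mode 0600. The secret layout can be checked against the log itself: the base64 payload is the ASCII hex key followed by four extra bytes (assumed here to be a CRC-32 suffix, as used by the DH-HMAC-CHAP secret representation), and the two-digit field selects the hash, matching the digests map above (00 null, 01 sha256, 02 sha384, 03 sha512). For key0:

    # decode the key0 secret that appears later in the nvme connect line; keep the first
    # 48 bytes and drop the trailing 4 (assumed CRC-32) bytes
    echo 'ZTdhMThiZWVjZTc4MDFhNDFjNjdhNGVmZmUxYjg2YmQwZWE0ZjEzNjhhMzU0YmRicEg4lA==' | base64 -d | head -c 48; echo
    # prints e7a18beece7801a41c67a4effe1b86bd0ea4f1368a354bdb, the same hex string xxd produced above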
00:10:26.700 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:26.700 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.959 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:26.959 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:26.959 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:10:26.959 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.959 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.959 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.959 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:26.959 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.REn 00:10:26.959 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.959 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.959 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.959 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.REn 00:10:26.959 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.REn 00:10:27.217 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.6Dh ]] 00:10:27.217 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6Dh 00:10:27.217 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.217 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.217 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.217 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6Dh 00:10:27.217 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6Dh 00:10:27.476 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:27.476 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.L9B 00:10:27.476 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.476 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.476 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.476 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.L9B 00:10:27.476 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.L9B 00:10:27.476 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.PS2 ]] 00:10:27.476 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PS2 00:10:27.476 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.476 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.734 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.734 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PS2 00:10:27.734 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PS2 00:10:27.734 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:27.734 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.29m 00:10:27.734 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.734 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.735 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.735 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.29m 00:10:27.735 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.29m 00:10:27.993 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.67W ]] 00:10:27.993 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.67W 00:10:27.993 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.993 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.993 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.993 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.67W 00:10:27.994 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.67W 00:10:28.252 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:28.252 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Zds 00:10:28.252 19:48:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.252 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.252 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.252 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Zds 00:10:28.252 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Zds 00:10:28.510 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:10:28.511 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:28.511 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:28.511 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:28.511 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:28.511 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:28.790 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:10:28.790 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:28.790 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:28.790 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:28.790 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:28.790 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:28.790 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:28.790 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.790 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.790 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.790 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:28.790 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:10:29.078 00:10:29.078 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:29.078 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:29.078 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:29.336 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:29.336 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:29.336 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.336 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.336 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.336 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:29.336 { 00:10:29.336 "cntlid": 1, 00:10:29.336 "qid": 0, 00:10:29.336 "state": "enabled", 00:10:29.336 "thread": "nvmf_tgt_poll_group_000", 00:10:29.336 "listen_address": { 00:10:29.336 "trtype": "TCP", 00:10:29.336 "adrfam": "IPv4", 00:10:29.336 "traddr": "10.0.0.2", 00:10:29.336 "trsvcid": "4420" 00:10:29.336 }, 00:10:29.336 "peer_address": { 00:10:29.336 "trtype": "TCP", 00:10:29.336 "adrfam": "IPv4", 00:10:29.336 "traddr": "10.0.0.1", 00:10:29.336 "trsvcid": "36260" 00:10:29.336 }, 00:10:29.336 "auth": { 00:10:29.336 "state": "completed", 00:10:29.336 "digest": "sha256", 00:10:29.336 "dhgroup": "null" 00:10:29.336 } 00:10:29.336 } 00:10:29.336 ]' 00:10:29.336 19:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:29.595 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:29.595 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:29.595 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:29.595 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:29.595 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:29.595 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:29.595 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:29.853 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:00:ZTdhMThiZWVjZTc4MDFhNDFjNjdhNGVmZmUxYjg2YmQwZWE0ZjEzNjhhMzU0YmRicEg4lA==: --dhchap-ctrl-secret DHHC-1:03:YWJkNTUxZjFjNzMwZWE0ZmM5MDdhM2IxMjBmZDFhMjdlM2EwNmUwMDlmZjFhNzBhNjgwNDRiYjhkNzkzYmZlOJBcYZg=: 00:10:35.119 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:35.119 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:10:35.119 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:10:35.119 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.119 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.119 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.119 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:35.119 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:35.119 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:35.119 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:35.120 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
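The two commands just echoed are the start of the per-round verification: connect_authenticate asserts that the host-side controller exists and that the target reports a completed authentication with the expected digest and DH group. Condensed, and assuming rpc.py is scripts/rpc.py with the target on its default /var/tmp/spdk.sock, the check amounts to:

    # host side: the controller created by bdev_nvme_attach_controller must be present
    rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
    # target side: the new qpair must report the negotiated auth parameters
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'           # expect: sha256, null, completed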
00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:35.120 { 00:10:35.120 "cntlid": 3, 00:10:35.120 "qid": 0, 00:10:35.120 "state": "enabled", 00:10:35.120 "thread": "nvmf_tgt_poll_group_000", 00:10:35.120 "listen_address": { 00:10:35.120 "trtype": "TCP", 00:10:35.120 "adrfam": "IPv4", 00:10:35.120 "traddr": "10.0.0.2", 00:10:35.120 "trsvcid": "4420" 00:10:35.120 }, 00:10:35.120 "peer_address": { 00:10:35.120 "trtype": "TCP", 00:10:35.120 "adrfam": "IPv4", 00:10:35.120 "traddr": "10.0.0.1", 00:10:35.120 "trsvcid": "49520" 00:10:35.120 }, 00:10:35.120 "auth": { 00:10:35.120 "state": "completed", 00:10:35.120 "digest": "sha256", 00:10:35.120 "dhgroup": "null" 00:10:35.120 } 00:10:35.120 } 00:10:35.120 ]' 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:35.120 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:35.379 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:01:YmY4OTA5ZDY2ZjMyZmEwZDBmZDE3OWE0Yzg3MGIyZmZ/3JeW: --dhchap-ctrl-secret DHHC-1:02:NjMzZThkZDQxZTdiZWZiMTdkYmRmNDA3NTAzODQxMDFiOTRhNWE0NjYyNDMyOGJm5DeHHg==: 00:10:36.322 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:36.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:36.322 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:10:36.322 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.322 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
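Each round also exercises the kernel initiator, not only the SPDK bdev path: nvme-cli connects to the same subsystem using the host secret registered with --dhchap-key (verified by the target) and the controller secret registered with --dhchap-ctrlr-key (for bidirectional authentication), then disconnects before the host is removed from the subsystem. With hypothetical shell variables standing in for the two DHHC-1 strings of this round (key1 and ckey1 above), the call reduces to:

    # $key1 / $ckey1 are placeholders for the DHHC-1:01:... and DHHC-1:02:... secrets shown above
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$NVME_HOSTNQN" --hostid "$NVME_HOSTID" \
        --dhchap-secret "$key1" --dhchap-ctrl-secret "$ckey1"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0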
00:10:36.322 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.322 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:36.322 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:36.322 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:36.322 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:10:36.322 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:36.322 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:36.322 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:36.322 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:36.322 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:36.322 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:36.322 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.322 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.322 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.322 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:36.322 19:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:36.889 00:10:36.889 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:36.889 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:36.889 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:37.148 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:37.148 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:37.148 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.148 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:37.148 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.148 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:37.148 { 00:10:37.148 "cntlid": 5, 00:10:37.148 "qid": 0, 00:10:37.148 "state": "enabled", 00:10:37.148 "thread": "nvmf_tgt_poll_group_000", 00:10:37.148 "listen_address": { 00:10:37.148 "trtype": "TCP", 00:10:37.148 "adrfam": "IPv4", 00:10:37.148 "traddr": "10.0.0.2", 00:10:37.148 "trsvcid": "4420" 00:10:37.148 }, 00:10:37.148 "peer_address": { 00:10:37.148 "trtype": "TCP", 00:10:37.148 "adrfam": "IPv4", 00:10:37.148 "traddr": "10.0.0.1", 00:10:37.148 "trsvcid": "49534" 00:10:37.148 }, 00:10:37.148 "auth": { 00:10:37.148 "state": "completed", 00:10:37.148 "digest": "sha256", 00:10:37.148 "dhgroup": "null" 00:10:37.148 } 00:10:37.148 } 00:10:37.148 ]' 00:10:37.148 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:37.148 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:37.148 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:37.148 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:37.148 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:37.148 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:37.148 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:37.148 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:37.406 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:02:YWU5Zjc3YTdjNjg0YmQ4YWU5YjY5NTkzODAwYzYwZTQ1ZWEyMTk5NjM4MGIyMjM3pJeUjw==: --dhchap-ctrl-secret DHHC-1:01:OTU3YmU0MGNlNjNmMzQxNmViYmY0OWU1ODkxOWIyZGYhPufI: 00:10:37.972 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:37.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:37.972 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:10:37.972 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.972 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.972 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.972 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:37.972 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:37.972 19:49:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:38.537 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:10:38.537 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:38.537 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:38.537 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:38.537 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:38.537 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.537 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key3 00:10:38.537 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.537 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.538 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.538 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:38.538 19:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:38.538 00:10:38.796 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:38.796 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:38.796 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:39.111 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:39.111 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:39.111 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.111 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.111 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.111 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:39.111 { 00:10:39.111 "cntlid": 7, 00:10:39.111 "qid": 0, 00:10:39.111 "state": "enabled", 00:10:39.111 "thread": "nvmf_tgt_poll_group_000", 00:10:39.111 "listen_address": { 00:10:39.111 "trtype": "TCP", 00:10:39.111 "adrfam": "IPv4", 00:10:39.111 "traddr": 
"10.0.0.2", 00:10:39.111 "trsvcid": "4420" 00:10:39.111 }, 00:10:39.111 "peer_address": { 00:10:39.111 "trtype": "TCP", 00:10:39.111 "adrfam": "IPv4", 00:10:39.111 "traddr": "10.0.0.1", 00:10:39.111 "trsvcid": "49574" 00:10:39.111 }, 00:10:39.111 "auth": { 00:10:39.111 "state": "completed", 00:10:39.111 "digest": "sha256", 00:10:39.111 "dhgroup": "null" 00:10:39.111 } 00:10:39.111 } 00:10:39.111 ]' 00:10:39.111 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:39.111 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:39.111 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:39.111 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:39.111 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:39.111 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:39.111 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:39.111 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:39.383 19:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:03:NzMyNjI4OGYwMWQ3YmYzMmU2N2RkZGZhYTdhMmJmY2U3NDU4YmJhNTYzYTFkOTRjZGY5MTBkYjA1NjIyNjE4OCpnTeA=: 00:10:39.949 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:39.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:39.949 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:10:39.950 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.950 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.950 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.950 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:39.950 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:39.950 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:39.950 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:40.207 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:10:40.207 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:40.207 19:49:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:40.207 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:40.207 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:40.207 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:40.207 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:40.207 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.207 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.207 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.207 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:40.207 19:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:40.465 00:10:40.723 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:40.723 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:40.723 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:40.980 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:40.980 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:40.980 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.980 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.980 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.980 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:40.980 { 00:10:40.980 "cntlid": 9, 00:10:40.980 "qid": 0, 00:10:40.980 "state": "enabled", 00:10:40.980 "thread": "nvmf_tgt_poll_group_000", 00:10:40.980 "listen_address": { 00:10:40.980 "trtype": "TCP", 00:10:40.980 "adrfam": "IPv4", 00:10:40.980 "traddr": "10.0.0.2", 00:10:40.980 "trsvcid": "4420" 00:10:40.980 }, 00:10:40.980 "peer_address": { 00:10:40.980 "trtype": "TCP", 00:10:40.980 "adrfam": "IPv4", 00:10:40.980 "traddr": "10.0.0.1", 00:10:40.980 "trsvcid": "48048" 00:10:40.980 }, 00:10:40.980 "auth": { 00:10:40.980 "state": "completed", 00:10:40.980 "digest": "sha256", 00:10:40.980 "dhgroup": "ffdhe2048" 00:10:40.980 } 00:10:40.980 } 
00:10:40.980 ]' 00:10:40.980 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:40.980 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:40.980 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:40.980 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:40.980 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:40.980 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:40.980 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:40.980 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:41.238 19:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:00:ZTdhMThiZWVjZTc4MDFhNDFjNjdhNGVmZmUxYjg2YmQwZWE0ZjEzNjhhMzU0YmRicEg4lA==: --dhchap-ctrl-secret DHHC-1:03:YWJkNTUxZjFjNzMwZWE0ZmM5MDdhM2IxMjBmZDFhMjdlM2EwNmUwMDlmZjFhNzBhNjgwNDRiYjhkNzkzYmZlOJBcYZg=: 00:10:41.803 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:42.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:42.122 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:10:42.122 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.122 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.122 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.122 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:42.122 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:42.122 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:42.122 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:10:42.122 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:42.122 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:42.122 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:42.122 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:42.122 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.122 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:42.122 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.122 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.380 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.380 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:42.380 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:42.637 00:10:42.637 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:42.637 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:42.637 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:42.896 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:42.896 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:42.896 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.896 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.896 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.896 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:42.896 { 00:10:42.896 "cntlid": 11, 00:10:42.896 "qid": 0, 00:10:42.896 "state": "enabled", 00:10:42.896 "thread": "nvmf_tgt_poll_group_000", 00:10:42.896 "listen_address": { 00:10:42.896 "trtype": "TCP", 00:10:42.896 "adrfam": "IPv4", 00:10:42.896 "traddr": "10.0.0.2", 00:10:42.896 "trsvcid": "4420" 00:10:42.896 }, 00:10:42.896 "peer_address": { 00:10:42.896 "trtype": "TCP", 00:10:42.896 "adrfam": "IPv4", 00:10:42.896 "traddr": "10.0.0.1", 00:10:42.896 "trsvcid": "48074" 00:10:42.896 }, 00:10:42.896 "auth": { 00:10:42.896 "state": "completed", 00:10:42.896 "digest": "sha256", 00:10:42.896 "dhgroup": "ffdhe2048" 00:10:42.896 } 00:10:42.896 } 00:10:42.896 ]' 00:10:42.896 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:42.896 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:42.896 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:42.896 19:49:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:42.896 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:42.896 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:42.896 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:42.896 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:43.463 19:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:01:YmY4OTA5ZDY2ZjMyZmEwZDBmZDE3OWE0Yzg3MGIyZmZ/3JeW: --dhchap-ctrl-secret DHHC-1:02:NjMzZThkZDQxZTdiZWZiMTdkYmRmNDA3NTAzODQxMDFiOTRhNWE0NjYyNDMyOGJm5DeHHg==: 00:10:44.030 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:44.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:44.030 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:10:44.030 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.030 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.030 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.030 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:44.030 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:44.030 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:44.288 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:10:44.288 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:44.288 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:44.288 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:44.288 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:44.288 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.288 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:44.288 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
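Each iteration then repeats the handshake from the kernel initiator: nvme-cli connects to the same subsystem with the matching DHHC-1 secrets passed on the command line, disconnects, and the host entry is removed from the subsystem before the next key is tried. A sketch of that half, mirroring the logged nvme connect flags; the <host-secret>/<ctrl-secret> strings are placeholders standing in for the full DHHC-1 values shown in the log.

# Kernel-initiator pass, mirroring the "nvme connect ... --dhchap-secret ..." calls
# above. The secret strings are placeholders for the generated DHHC-1 values.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostid=69cdc0e8-4c23-4318-834b-1d87efff05de
hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid
subnqn=nqn.2024-03.io.spdk:cnode0

nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
    -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret 'DHHC-1:01:<host-secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:02:<ctrl-secret>:'

nvme disconnect -n "$subnqn"

# Target side: deregister the host before the next key/dhgroup combination.
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"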
00:10:44.288 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.288 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.288 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:44.288 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:44.547 00:10:44.547 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:44.547 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:44.547 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:44.805 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:44.805 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:44.805 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.805 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.805 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.805 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:44.805 { 00:10:44.805 "cntlid": 13, 00:10:44.805 "qid": 0, 00:10:44.805 "state": "enabled", 00:10:44.805 "thread": "nvmf_tgt_poll_group_000", 00:10:44.805 "listen_address": { 00:10:44.805 "trtype": "TCP", 00:10:44.805 "adrfam": "IPv4", 00:10:44.805 "traddr": "10.0.0.2", 00:10:44.805 "trsvcid": "4420" 00:10:44.805 }, 00:10:44.805 "peer_address": { 00:10:44.805 "trtype": "TCP", 00:10:44.805 "adrfam": "IPv4", 00:10:44.805 "traddr": "10.0.0.1", 00:10:44.805 "trsvcid": "48112" 00:10:44.805 }, 00:10:44.805 "auth": { 00:10:44.805 "state": "completed", 00:10:44.805 "digest": "sha256", 00:10:44.805 "dhgroup": "ffdhe2048" 00:10:44.805 } 00:10:44.805 } 00:10:44.805 ]' 00:10:44.805 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:45.063 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:45.063 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:45.063 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:45.063 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:45.063 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.063 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.063 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.322 19:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:02:YWU5Zjc3YTdjNjg0YmQ4YWU5YjY5NTkzODAwYzYwZTQ1ZWEyMTk5NjM4MGIyMjM3pJeUjw==: --dhchap-ctrl-secret DHHC-1:01:OTU3YmU0MGNlNjNmMzQxNmViYmY0OWU1ODkxOWIyZGYhPufI: 00:10:45.942 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:45.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:45.942 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:10:45.942 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.942 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.942 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.942 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:45.942 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:45.942 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:46.201 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:10:46.201 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:46.201 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:46.201 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:46.201 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:46.201 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:46.201 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key3 00:10:46.201 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.201 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.201 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.201 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:46.201 19:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:46.459 00:10:46.717 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:46.717 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:46.717 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:46.717 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:46.717 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:46.717 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.717 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.976 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.976 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:46.976 { 00:10:46.976 "cntlid": 15, 00:10:46.976 "qid": 0, 00:10:46.976 "state": "enabled", 00:10:46.976 "thread": "nvmf_tgt_poll_group_000", 00:10:46.976 "listen_address": { 00:10:46.976 "trtype": "TCP", 00:10:46.976 "adrfam": "IPv4", 00:10:46.976 "traddr": "10.0.0.2", 00:10:46.976 "trsvcid": "4420" 00:10:46.976 }, 00:10:46.976 "peer_address": { 00:10:46.976 "trtype": "TCP", 00:10:46.976 "adrfam": "IPv4", 00:10:46.976 "traddr": "10.0.0.1", 00:10:46.976 "trsvcid": "48134" 00:10:46.976 }, 00:10:46.976 "auth": { 00:10:46.976 "state": "completed", 00:10:46.976 "digest": "sha256", 00:10:46.976 "dhgroup": "ffdhe2048" 00:10:46.976 } 00:10:46.976 } 00:10:46.976 ]' 00:10:46.976 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:46.976 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:46.976 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:46.976 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:46.976 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:46.976 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:46.976 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:46.976 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.234 19:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:03:NzMyNjI4OGYwMWQ3YmYzMmU2N2RkZGZhYTdhMmJmY2U3NDU4YmJhNTYzYTFkOTRjZGY5MTBkYjA1NjIyNjE4OCpnTeA=: 00:10:48.168 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.168 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:10:48.168 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.168 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.168 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.168 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:48.168 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:48.168 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:48.168 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:48.168 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:10:48.168 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:48.168 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:48.168 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:48.168 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:48.168 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.168 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:48.168 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.168 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.168 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.168 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:48.168 19:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:48.426 00:10:48.426 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:48.426 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:48.426 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:48.685 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:48.685 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:48.685 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.685 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.943 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.943 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:48.943 { 00:10:48.943 "cntlid": 17, 00:10:48.943 "qid": 0, 00:10:48.943 "state": "enabled", 00:10:48.943 "thread": "nvmf_tgt_poll_group_000", 00:10:48.943 "listen_address": { 00:10:48.943 "trtype": "TCP", 00:10:48.943 "adrfam": "IPv4", 00:10:48.943 "traddr": "10.0.0.2", 00:10:48.943 "trsvcid": "4420" 00:10:48.943 }, 00:10:48.943 "peer_address": { 00:10:48.943 "trtype": "TCP", 00:10:48.943 "adrfam": "IPv4", 00:10:48.943 "traddr": "10.0.0.1", 00:10:48.943 "trsvcid": "48146" 00:10:48.943 }, 00:10:48.943 "auth": { 00:10:48.943 "state": "completed", 00:10:48.943 "digest": "sha256", 00:10:48.943 "dhgroup": "ffdhe3072" 00:10:48.943 } 00:10:48.943 } 00:10:48.943 ]' 00:10:48.943 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:48.943 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:48.943 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:48.943 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:48.943 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:48.943 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:48.943 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:48.943 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:49.200 19:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:00:ZTdhMThiZWVjZTc4MDFhNDFjNjdhNGVmZmUxYjg2YmQwZWE0ZjEzNjhhMzU0YmRicEg4lA==: --dhchap-ctrl-secret DHHC-1:03:YWJkNTUxZjFjNzMwZWE0ZmM5MDdhM2IxMjBmZDFhMjdlM2EwNmUwMDlmZjFhNzBhNjgwNDRiYjhkNzkzYmZlOJBcYZg=: 00:10:50.135 19:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.135 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:10:50.135 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.135 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.135 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.135 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:50.135 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:50.135 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:50.135 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:10:50.135 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:50.135 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:50.135 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:50.135 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:50.135 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.135 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.135 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.135 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.135 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.135 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.135 19:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.392 00:10:50.392 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:50.392 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
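By this point the same sequence has been driven through dhgroup null, ffdhe2048 and ffdhe3072; the repetition comes from the nested loops visible in the xtrace markers (target/auth.sh@92-@96), which run every dhgroup against every key index for the current digest. A reconstructed outline of that driver loop follows: the hostrpc expansion matches the auth.sh@31 lines seen throughout, while connect_authenticate is stubbed and the key list is a placeholder.

# Reconstructed driver loop for the sha256 runs above. Only the loop shape, the
# hostrpc expansion and the RPC names come from the log; the rest is illustrative.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }                     # expansion seen at target/auth.sh@31
connect_authenticate() { echo "test digest=$1 dhgroup=$2 key=$3"; }  # stub for the real function
keys=(key0 key1 key2 key3)                                           # key indices 0-3 appear in the log

for dhgroup in null ffdhe2048 ffdhe3072; do                          # groups exercised in this excerpt
    for keyid in "${!keys[@]}"; do
        hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha256 "$dhgroup" "$keyid"
    done
done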
00:10:50.392 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.651 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:50.651 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:50.651 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.651 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.651 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.651 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:50.651 { 00:10:50.651 "cntlid": 19, 00:10:50.651 "qid": 0, 00:10:50.651 "state": "enabled", 00:10:50.651 "thread": "nvmf_tgt_poll_group_000", 00:10:50.651 "listen_address": { 00:10:50.651 "trtype": "TCP", 00:10:50.651 "adrfam": "IPv4", 00:10:50.651 "traddr": "10.0.0.2", 00:10:50.651 "trsvcid": "4420" 00:10:50.651 }, 00:10:50.651 "peer_address": { 00:10:50.651 "trtype": "TCP", 00:10:50.651 "adrfam": "IPv4", 00:10:50.651 "traddr": "10.0.0.1", 00:10:50.651 "trsvcid": "44094" 00:10:50.651 }, 00:10:50.651 "auth": { 00:10:50.651 "state": "completed", 00:10:50.651 "digest": "sha256", 00:10:50.651 "dhgroup": "ffdhe3072" 00:10:50.651 } 00:10:50.651 } 00:10:50.651 ]' 00:10:50.651 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:50.909 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:50.909 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:50.909 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:50.909 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:50.909 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:50.909 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:50.909 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.167 19:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:01:YmY4OTA5ZDY2ZjMyZmEwZDBmZDE3OWE0Yzg3MGIyZmZ/3JeW: --dhchap-ctrl-secret DHHC-1:02:NjMzZThkZDQxZTdiZWZiMTdkYmRmNDA3NTAzODQxMDFiOTRhNWE0NjYyNDMyOGJm5DeHHg==: 00:10:51.733 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:51.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:51.733 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:10:51.733 19:49:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.733 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.733 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.733 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:51.733 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:51.733 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:51.991 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:10:51.991 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:51.991 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:51.991 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:51.991 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:51.991 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:51.991 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:51.991 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.991 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.991 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.991 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:51.991 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.557 00:10:52.557 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:52.557 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:52.557 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:52.557 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:52.557 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:10:52.557 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.557 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.557 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.557 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:52.557 { 00:10:52.557 "cntlid": 21, 00:10:52.557 "qid": 0, 00:10:52.557 "state": "enabled", 00:10:52.557 "thread": "nvmf_tgt_poll_group_000", 00:10:52.557 "listen_address": { 00:10:52.557 "trtype": "TCP", 00:10:52.557 "adrfam": "IPv4", 00:10:52.557 "traddr": "10.0.0.2", 00:10:52.557 "trsvcid": "4420" 00:10:52.557 }, 00:10:52.557 "peer_address": { 00:10:52.557 "trtype": "TCP", 00:10:52.557 "adrfam": "IPv4", 00:10:52.557 "traddr": "10.0.0.1", 00:10:52.557 "trsvcid": "44114" 00:10:52.557 }, 00:10:52.557 "auth": { 00:10:52.557 "state": "completed", 00:10:52.557 "digest": "sha256", 00:10:52.557 "dhgroup": "ffdhe3072" 00:10:52.557 } 00:10:52.557 } 00:10:52.557 ]' 00:10:52.557 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:52.816 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:52.816 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:52.816 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:52.816 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:52.816 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:52.816 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:52.816 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:53.074 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:02:YWU5Zjc3YTdjNjg0YmQ4YWU5YjY5NTkzODAwYzYwZTQ1ZWEyMTk5NjM4MGIyMjM3pJeUjw==: --dhchap-ctrl-secret DHHC-1:01:OTU3YmU0MGNlNjNmMzQxNmViYmY0OWU1ODkxOWIyZGYhPufI: 00:10:53.640 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:53.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:53.640 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:10:53.640 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.640 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.640 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.640 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:10:53.640 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:53.640 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:53.899 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:10:53.899 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:53.899 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:53.899 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:53.899 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:53.899 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:53.899 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key3 00:10:53.899 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.899 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.899 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.899 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:53.899 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:54.465 00:10:54.465 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:54.465 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:54.465 19:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:54.724 19:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:54.724 19:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.724 19:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.724 19:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.724 19:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.724 19:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:54.724 { 00:10:54.724 "cntlid": 
23, 00:10:54.724 "qid": 0, 00:10:54.724 "state": "enabled", 00:10:54.724 "thread": "nvmf_tgt_poll_group_000", 00:10:54.724 "listen_address": { 00:10:54.724 "trtype": "TCP", 00:10:54.724 "adrfam": "IPv4", 00:10:54.724 "traddr": "10.0.0.2", 00:10:54.724 "trsvcid": "4420" 00:10:54.724 }, 00:10:54.724 "peer_address": { 00:10:54.724 "trtype": "TCP", 00:10:54.724 "adrfam": "IPv4", 00:10:54.724 "traddr": "10.0.0.1", 00:10:54.724 "trsvcid": "44132" 00:10:54.724 }, 00:10:54.724 "auth": { 00:10:54.724 "state": "completed", 00:10:54.724 "digest": "sha256", 00:10:54.724 "dhgroup": "ffdhe3072" 00:10:54.724 } 00:10:54.724 } 00:10:54.724 ]' 00:10:54.724 19:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:54.724 19:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:54.724 19:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:54.724 19:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:54.724 19:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:54.981 19:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:54.981 19:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:54.981 19:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.239 19:49:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:03:NzMyNjI4OGYwMWQ3YmYzMmU2N2RkZGZhYTdhMmJmY2U3NDU4YmJhNTYzYTFkOTRjZGY5MTBkYjA1NjIyNjE4OCpnTeA=: 00:10:55.805 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:55.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:55.805 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:10:55.805 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.805 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.805 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.805 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:55.805 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:55.805 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:55.805 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:56.063 19:49:24 
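The target/auth.sh@92-@96 markers above imply this section is driven by a nested loop over DH groups and key indices; a minimal reconstruction of that skeleton, assuming only what this log shows (sha256 with ffdhe3072/4096/6144/8192 and key indices 0-3; the keys/ckeys arrays and the connect_authenticate helper are defined earlier in auth.sh and not reproduced here):

dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)      # groups visible in this part of the log
for dhgroup in "${dhgroups[@]}"; do                     # target/auth.sh@92
    for keyid in "${!keys[@]}"; do                      # target/auth.sh@93
        # pin the host to one digest/dhgroup, then run one authenticated connect cycle
        hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"   # @94
        connect_authenticate sha256 "$dhgroup" "$keyid"                                      # @96
    done
done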
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:10:56.063 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:56.063 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:56.063 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:56.063 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:56.063 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.063 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.063 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.063 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.063 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.063 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.063 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.321 00:10:56.321 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:56.322 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:56.322 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:56.888 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:56.888 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:56.888 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.889 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.889 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.889 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:56.889 { 00:10:56.889 "cntlid": 25, 00:10:56.889 "qid": 0, 00:10:56.889 "state": "enabled", 00:10:56.889 "thread": "nvmf_tgt_poll_group_000", 00:10:56.889 "listen_address": { 00:10:56.889 "trtype": "TCP", 00:10:56.889 "adrfam": "IPv4", 00:10:56.889 "traddr": "10.0.0.2", 00:10:56.889 "trsvcid": "4420" 00:10:56.889 }, 00:10:56.889 "peer_address": { 00:10:56.889 "trtype": "TCP", 00:10:56.889 
"adrfam": "IPv4", 00:10:56.889 "traddr": "10.0.0.1", 00:10:56.889 "trsvcid": "44158" 00:10:56.889 }, 00:10:56.889 "auth": { 00:10:56.889 "state": "completed", 00:10:56.889 "digest": "sha256", 00:10:56.889 "dhgroup": "ffdhe4096" 00:10:56.889 } 00:10:56.889 } 00:10:56.889 ]' 00:10:56.889 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:56.889 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:56.889 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:56.889 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:56.889 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:56.889 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:56.889 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:56.889 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.148 19:49:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:00:ZTdhMThiZWVjZTc4MDFhNDFjNjdhNGVmZmUxYjg2YmQwZWE0ZjEzNjhhMzU0YmRicEg4lA==: --dhchap-ctrl-secret DHHC-1:03:YWJkNTUxZjFjNzMwZWE0ZmM5MDdhM2IxMjBmZDFhMjdlM2EwNmUwMDlmZjFhNzBhNjgwNDRiYjhkNzkzYmZlOJBcYZg=: 00:10:57.715 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.715 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:10:57.715 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.715 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.972 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.972 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:57.972 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:57.972 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:57.972 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:10:57.972 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:57.972 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:57.972 19:49:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:57.972 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:57.973 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.973 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:57.973 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.973 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.973 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.973 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:57.973 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.540 00:10:58.540 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:58.540 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.540 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:58.798 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.798 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.798 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.798 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.798 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.798 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:58.798 { 00:10:58.798 "cntlid": 27, 00:10:58.798 "qid": 0, 00:10:58.798 "state": "enabled", 00:10:58.798 "thread": "nvmf_tgt_poll_group_000", 00:10:58.798 "listen_address": { 00:10:58.798 "trtype": "TCP", 00:10:58.798 "adrfam": "IPv4", 00:10:58.798 "traddr": "10.0.0.2", 00:10:58.798 "trsvcid": "4420" 00:10:58.798 }, 00:10:58.798 "peer_address": { 00:10:58.798 "trtype": "TCP", 00:10:58.798 "adrfam": "IPv4", 00:10:58.798 "traddr": "10.0.0.1", 00:10:58.798 "trsvcid": "44190" 00:10:58.798 }, 00:10:58.798 "auth": { 00:10:58.798 "state": "completed", 00:10:58.798 "digest": "sha256", 00:10:58.798 "dhgroup": "ffdhe4096" 00:10:58.798 } 00:10:58.798 } 00:10:58.798 ]' 00:10:58.798 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
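The target-side half of each pass is the per-key host registration seen in the @39 and @56 trace lines; with the subsystem and host NQNs from this run it amounts to:

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de
# allow the host to authenticate with key1 (ckey1 as the controller key for bidirectional auth) ...
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
# ... and drop the registration again once the pass completes
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"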
'.[0].auth.digest' 00:10:58.798 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:58.798 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:58.798 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:58.798 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:58.798 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.798 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.798 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.057 19:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:01:YmY4OTA5ZDY2ZjMyZmEwZDBmZDE3OWE0Yzg3MGIyZmZ/3JeW: --dhchap-ctrl-secret DHHC-1:02:NjMzZThkZDQxZTdiZWZiMTdkYmRmNDA3NTAzODQxMDFiOTRhNWE0NjYyNDMyOGJm5DeHHg==: 00:10:59.622 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:59.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:59.881 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:10:59.881 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.881 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.881 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.881 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:59.881 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:59.881 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:00.140 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:11:00.140 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:00.140 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:00.140 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:00.140 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:00.140 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.140 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.140 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.140 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.140 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.140 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.140 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.399 00:11:00.399 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:00.399 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.399 19:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:00.657 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.658 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.658 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.658 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.658 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.658 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:00.658 { 00:11:00.658 "cntlid": 29, 00:11:00.658 "qid": 0, 00:11:00.658 "state": "enabled", 00:11:00.658 "thread": "nvmf_tgt_poll_group_000", 00:11:00.658 "listen_address": { 00:11:00.658 "trtype": "TCP", 00:11:00.658 "adrfam": "IPv4", 00:11:00.658 "traddr": "10.0.0.2", 00:11:00.658 "trsvcid": "4420" 00:11:00.658 }, 00:11:00.658 "peer_address": { 00:11:00.658 "trtype": "TCP", 00:11:00.658 "adrfam": "IPv4", 00:11:00.658 "traddr": "10.0.0.1", 00:11:00.658 "trsvcid": "50776" 00:11:00.658 }, 00:11:00.658 "auth": { 00:11:00.658 "state": "completed", 00:11:00.658 "digest": "sha256", 00:11:00.658 "dhgroup": "ffdhe4096" 00:11:00.658 } 00:11:00.658 } 00:11:00.658 ]' 00:11:00.658 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:00.658 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:00.658 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:00.658 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:00.658 19:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:00.922 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.922 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.922 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.180 19:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:02:YWU5Zjc3YTdjNjg0YmQ4YWU5YjY5NTkzODAwYzYwZTQ1ZWEyMTk5NjM4MGIyMjM3pJeUjw==: --dhchap-ctrl-secret DHHC-1:01:OTU3YmU0MGNlNjNmMzQxNmViYmY0OWU1ODkxOWIyZGYhPufI: 00:11:01.748 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.748 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:01.748 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.748 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.748 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.748 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:01.748 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:01.748 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:02.006 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:11:02.006 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:02.006 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:02.006 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:02.006 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:02.006 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.006 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key3 00:11:02.006 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.006 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.007 19:49:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.007 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:02.007 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:02.571 00:11:02.571 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:02.571 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.571 19:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:02.830 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.830 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.830 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.830 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.830 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.830 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:02.830 { 00:11:02.830 "cntlid": 31, 00:11:02.830 "qid": 0, 00:11:02.830 "state": "enabled", 00:11:02.830 "thread": "nvmf_tgt_poll_group_000", 00:11:02.830 "listen_address": { 00:11:02.830 "trtype": "TCP", 00:11:02.830 "adrfam": "IPv4", 00:11:02.830 "traddr": "10.0.0.2", 00:11:02.830 "trsvcid": "4420" 00:11:02.830 }, 00:11:02.830 "peer_address": { 00:11:02.830 "trtype": "TCP", 00:11:02.830 "adrfam": "IPv4", 00:11:02.830 "traddr": "10.0.0.1", 00:11:02.830 "trsvcid": "50806" 00:11:02.830 }, 00:11:02.830 "auth": { 00:11:02.830 "state": "completed", 00:11:02.830 "digest": "sha256", 00:11:02.830 "dhgroup": "ffdhe4096" 00:11:02.830 } 00:11:02.830 } 00:11:02.830 ]' 00:11:02.830 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:02.830 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:02.830 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:02.830 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:02.830 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:02.830 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.830 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.830 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
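Each pass is verified by dumping the subsystem's queue pairs and checking the negotiated auth parameters with jq, as the @44-@48 lines above show; a condensed sketch follows (how the qpairs JSON is fed into jq is assumed, only the filters and comparisons appear in the trace):

# the controller must have come up on the host side ...
[[ "$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
# ... and the target-side qpair must report the expected digest/dhgroup with authentication completed
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == "sha256"    ]]
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == "ffdhe4096" ]]
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == "completed" ]]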
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.088 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:03:NzMyNjI4OGYwMWQ3YmYzMmU2N2RkZGZhYTdhMmJmY2U3NDU4YmJhNTYzYTFkOTRjZGY5MTBkYjA1NjIyNjE4OCpnTeA=: 00:11:04.023 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.023 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:04.023 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.023 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.023 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.023 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:04.023 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:04.023 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:04.023 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:04.023 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:11:04.023 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:04.023 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:04.023 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:04.023 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:04.023 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.023 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.023 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.023 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.023 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.023 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:11:04.023 19:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.590 00:11:04.590 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:04.590 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:04.590 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.848 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.848 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.848 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.848 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.848 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.848 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:04.848 { 00:11:04.848 "cntlid": 33, 00:11:04.848 "qid": 0, 00:11:04.848 "state": "enabled", 00:11:04.848 "thread": "nvmf_tgt_poll_group_000", 00:11:04.849 "listen_address": { 00:11:04.849 "trtype": "TCP", 00:11:04.849 "adrfam": "IPv4", 00:11:04.849 "traddr": "10.0.0.2", 00:11:04.849 "trsvcid": "4420" 00:11:04.849 }, 00:11:04.849 "peer_address": { 00:11:04.849 "trtype": "TCP", 00:11:04.849 "adrfam": "IPv4", 00:11:04.849 "traddr": "10.0.0.1", 00:11:04.849 "trsvcid": "50828" 00:11:04.849 }, 00:11:04.849 "auth": { 00:11:04.849 "state": "completed", 00:11:04.849 "digest": "sha256", 00:11:04.849 "dhgroup": "ffdhe6144" 00:11:04.849 } 00:11:04.849 } 00:11:04.849 ]' 00:11:04.849 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:04.849 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:04.849 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:04.849 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:04.849 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:04.849 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:04.849 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:04.849 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.107 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 
69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:00:ZTdhMThiZWVjZTc4MDFhNDFjNjdhNGVmZmUxYjg2YmQwZWE0ZjEzNjhhMzU0YmRicEg4lA==: --dhchap-ctrl-secret DHHC-1:03:YWJkNTUxZjFjNzMwZWE0ZmM5MDdhM2IxMjBmZDFhMjdlM2EwNmUwMDlmZjFhNzBhNjgwNDRiYjhkNzkzYmZlOJBcYZg=: 00:11:06.041 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.041 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:06.041 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.041 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.041 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.041 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:06.041 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:06.041 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:06.041 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:11:06.041 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:06.041 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:06.041 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:06.041 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:06.041 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.041 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.041 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.041 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.299 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.299 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.299 19:49:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.557 00:11:06.557 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:06.557 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.557 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:06.816 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:06.816 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:06.816 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.816 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.816 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.816 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:06.816 { 00:11:06.816 "cntlid": 35, 00:11:06.816 "qid": 0, 00:11:06.816 "state": "enabled", 00:11:06.816 "thread": "nvmf_tgt_poll_group_000", 00:11:06.816 "listen_address": { 00:11:06.816 "trtype": "TCP", 00:11:06.816 "adrfam": "IPv4", 00:11:06.816 "traddr": "10.0.0.2", 00:11:06.816 "trsvcid": "4420" 00:11:06.816 }, 00:11:06.816 "peer_address": { 00:11:06.816 "trtype": "TCP", 00:11:06.816 "adrfam": "IPv4", 00:11:06.816 "traddr": "10.0.0.1", 00:11:06.816 "trsvcid": "50852" 00:11:06.816 }, 00:11:06.816 "auth": { 00:11:06.816 "state": "completed", 00:11:06.816 "digest": "sha256", 00:11:06.816 "dhgroup": "ffdhe6144" 00:11:06.816 } 00:11:06.816 } 00:11:06.816 ]' 00:11:06.816 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:07.074 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:07.074 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:07.074 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:07.074 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:07.074 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.074 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.074 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.333 19:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:01:YmY4OTA5ZDY2ZjMyZmEwZDBmZDE3OWE0Yzg3MGIyZmZ/3JeW: --dhchap-ctrl-secret DHHC-1:02:NjMzZThkZDQxZTdiZWZiMTdkYmRmNDA3NTAzODQxMDFiOTRhNWE0NjYyNDMyOGJm5DeHHg==: 00:11:07.899 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:07.899 
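The same keys are then exercised through the kernel initiator with nvme-cli, as in the @52 and @55 lines above; the long DHHC-1 secrets are abbreviated here as placeholders, the real values being the ones printed in this log:

# kernel-initiator side: connect with the plain-text DHCHAP secrets, then disconnect
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de \
    --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de \
    --dhchap-secret 'DHHC-1:01:<host secret as printed above>' \
    --dhchap-ctrl-secret 'DHHC-1:02:<controller secret as printed above>'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0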
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:07.899 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:07.899 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.899 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.899 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.899 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:07.899 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:07.899 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:08.465 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:11:08.465 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:08.465 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:08.465 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:08.465 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:08.465 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.465 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.465 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.465 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.465 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.465 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.465 19:49:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.723 00:11:08.723 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:08.723 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.723 19:49:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:08.980 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.980 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.980 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.980 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.980 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.980 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:08.980 { 00:11:08.980 "cntlid": 37, 00:11:08.980 "qid": 0, 00:11:08.980 "state": "enabled", 00:11:08.980 "thread": "nvmf_tgt_poll_group_000", 00:11:08.980 "listen_address": { 00:11:08.980 "trtype": "TCP", 00:11:08.980 "adrfam": "IPv4", 00:11:08.980 "traddr": "10.0.0.2", 00:11:08.980 "trsvcid": "4420" 00:11:08.980 }, 00:11:08.980 "peer_address": { 00:11:08.980 "trtype": "TCP", 00:11:08.980 "adrfam": "IPv4", 00:11:08.980 "traddr": "10.0.0.1", 00:11:08.980 "trsvcid": "50872" 00:11:08.980 }, 00:11:08.980 "auth": { 00:11:08.980 "state": "completed", 00:11:08.980 "digest": "sha256", 00:11:08.980 "dhgroup": "ffdhe6144" 00:11:08.980 } 00:11:08.980 } 00:11:08.980 ]' 00:11:08.980 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:09.238 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:09.238 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:09.238 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:09.238 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:09.238 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.238 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.238 19:49:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.495 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:02:YWU5Zjc3YTdjNjg0YmQ4YWU5YjY5NTkzODAwYzYwZTQ1ZWEyMTk5NjM4MGIyMjM3pJeUjw==: --dhchap-ctrl-secret DHHC-1:01:OTU3YmU0MGNlNjNmMzQxNmViYmY0OWU1ODkxOWIyZGYhPufI: 00:11:10.430 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.430 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:10.430 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:10.430 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.430 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.430 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:10.430 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:10.430 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:10.430 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:11:10.430 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:10.430 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:10.430 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:10.430 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:10.430 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.430 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key3 00:11:10.430 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.430 19:49:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.430 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.430 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:10.430 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:10.997 00:11:10.997 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:10.997 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.997 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:11.256 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.256 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.256 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.256 19:49:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.256 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.256 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:11.256 { 00:11:11.256 "cntlid": 39, 00:11:11.256 "qid": 0, 00:11:11.256 "state": "enabled", 00:11:11.256 "thread": "nvmf_tgt_poll_group_000", 00:11:11.256 "listen_address": { 00:11:11.256 "trtype": "TCP", 00:11:11.256 "adrfam": "IPv4", 00:11:11.256 "traddr": "10.0.0.2", 00:11:11.256 "trsvcid": "4420" 00:11:11.256 }, 00:11:11.256 "peer_address": { 00:11:11.256 "trtype": "TCP", 00:11:11.256 "adrfam": "IPv4", 00:11:11.256 "traddr": "10.0.0.1", 00:11:11.256 "trsvcid": "60550" 00:11:11.256 }, 00:11:11.256 "auth": { 00:11:11.256 "state": "completed", 00:11:11.256 "digest": "sha256", 00:11:11.256 "dhgroup": "ffdhe6144" 00:11:11.256 } 00:11:11.256 } 00:11:11.256 ]' 00:11:11.256 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:11.256 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:11.256 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:11.256 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:11.256 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:11.256 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.256 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.256 19:49:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.514 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:03:NzMyNjI4OGYwMWQ3YmYzMmU2N2RkZGZhYTdhMmJmY2U3NDU4YmJhNTYzYTFkOTRjZGY5MTBkYjA1NjIyNjE4OCpnTeA=: 00:11:12.080 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.338 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:12.338 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.338 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.338 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.338 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:12.338 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:12.338 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:12.338 19:49:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:12.597 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:11:12.597 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:12.597 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:12.597 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:12.597 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:12.597 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.597 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.597 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.597 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.597 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.597 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.597 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.164 00:11:13.164 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:13.164 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:13.164 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.423 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.423 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.423 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.423 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.423 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.423 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:13.423 { 00:11:13.423 "cntlid": 41, 00:11:13.423 "qid": 0, 
00:11:13.423 "state": "enabled", 00:11:13.423 "thread": "nvmf_tgt_poll_group_000", 00:11:13.423 "listen_address": { 00:11:13.423 "trtype": "TCP", 00:11:13.423 "adrfam": "IPv4", 00:11:13.423 "traddr": "10.0.0.2", 00:11:13.423 "trsvcid": "4420" 00:11:13.423 }, 00:11:13.423 "peer_address": { 00:11:13.423 "trtype": "TCP", 00:11:13.423 "adrfam": "IPv4", 00:11:13.423 "traddr": "10.0.0.1", 00:11:13.423 "trsvcid": "60574" 00:11:13.423 }, 00:11:13.423 "auth": { 00:11:13.423 "state": "completed", 00:11:13.423 "digest": "sha256", 00:11:13.423 "dhgroup": "ffdhe8192" 00:11:13.423 } 00:11:13.423 } 00:11:13.423 ]' 00:11:13.423 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:13.423 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:13.423 19:49:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:13.423 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:13.423 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:13.423 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.423 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.423 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.682 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:00:ZTdhMThiZWVjZTc4MDFhNDFjNjdhNGVmZmUxYjg2YmQwZWE0ZjEzNjhhMzU0YmRicEg4lA==: --dhchap-ctrl-secret DHHC-1:03:YWJkNTUxZjFjNzMwZWE0ZmM5MDdhM2IxMjBmZDFhMjdlM2EwNmUwMDlmZjFhNzBhNjgwNDRiYjhkNzkzYmZlOJBcYZg=: 00:11:14.618 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.618 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:14.618 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.618 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.618 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.618 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:14.618 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:14.618 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:14.874 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:11:14.874 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:14.874 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:14.874 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:14.874 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:14.874 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.874 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.874 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.874 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.874 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.874 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.875 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.440 00:11:15.440 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:15.440 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:15.440 19:49:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.698 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.698 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.698 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.698 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.698 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.698 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:15.698 { 00:11:15.698 "cntlid": 43, 00:11:15.698 "qid": 0, 00:11:15.698 "state": "enabled", 00:11:15.698 "thread": "nvmf_tgt_poll_group_000", 00:11:15.698 "listen_address": { 00:11:15.698 "trtype": "TCP", 00:11:15.698 "adrfam": "IPv4", 00:11:15.698 "traddr": "10.0.0.2", 00:11:15.698 "trsvcid": "4420" 00:11:15.698 }, 00:11:15.698 "peer_address": { 00:11:15.698 "trtype": "TCP", 00:11:15.698 "adrfam": "IPv4", 00:11:15.698 "traddr": "10.0.0.1", 
00:11:15.698 "trsvcid": "60592" 00:11:15.698 }, 00:11:15.698 "auth": { 00:11:15.698 "state": "completed", 00:11:15.698 "digest": "sha256", 00:11:15.698 "dhgroup": "ffdhe8192" 00:11:15.698 } 00:11:15.698 } 00:11:15.698 ]' 00:11:15.698 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:15.698 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:15.698 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:15.698 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:15.698 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:15.957 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.957 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.957 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.957 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:01:YmY4OTA5ZDY2ZjMyZmEwZDBmZDE3OWE0Yzg3MGIyZmZ/3JeW: --dhchap-ctrl-secret DHHC-1:02:NjMzZThkZDQxZTdiZWZiMTdkYmRmNDA3NTAzODQxMDFiOTRhNWE0NjYyNDMyOGJm5DeHHg==: 00:11:16.892 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.892 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:16.892 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.892 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.892 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.892 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:16.892 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:16.892 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:17.213 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:11:17.213 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:17.213 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:17.213 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:17.213 19:49:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:17.213 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.213 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.213 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.213 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.213 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.213 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.213 19:49:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.796 00:11:17.796 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:17.796 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:17.796 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.055 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.055 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.055 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.055 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.055 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.055 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:18.055 { 00:11:18.055 "cntlid": 45, 00:11:18.055 "qid": 0, 00:11:18.055 "state": "enabled", 00:11:18.055 "thread": "nvmf_tgt_poll_group_000", 00:11:18.055 "listen_address": { 00:11:18.055 "trtype": "TCP", 00:11:18.055 "adrfam": "IPv4", 00:11:18.055 "traddr": "10.0.0.2", 00:11:18.055 "trsvcid": "4420" 00:11:18.055 }, 00:11:18.055 "peer_address": { 00:11:18.055 "trtype": "TCP", 00:11:18.055 "adrfam": "IPv4", 00:11:18.055 "traddr": "10.0.0.1", 00:11:18.055 "trsvcid": "60618" 00:11:18.055 }, 00:11:18.055 "auth": { 00:11:18.055 "state": "completed", 00:11:18.055 "digest": "sha256", 00:11:18.055 "dhgroup": "ffdhe8192" 00:11:18.055 } 00:11:18.055 } 00:11:18.055 ]' 00:11:18.055 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:18.055 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:11:18.055 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:18.313 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:18.313 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:18.313 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.313 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.313 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.572 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:02:YWU5Zjc3YTdjNjg0YmQ4YWU5YjY5NTkzODAwYzYwZTQ1ZWEyMTk5NjM4MGIyMjM3pJeUjw==: --dhchap-ctrl-secret DHHC-1:01:OTU3YmU0MGNlNjNmMzQxNmViYmY0OWU1ODkxOWIyZGYhPufI: 00:11:19.138 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.138 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:19.138 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.138 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.138 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.138 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:19.138 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:19.138 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:19.397 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:11:19.397 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:19.397 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:19.397 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:19.397 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:19.397 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:19.397 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 
--dhchap-key key3 00:11:19.397 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.397 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.397 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.397 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:19.397 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:20.331 00:11:20.331 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:20.331 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:20.331 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:20.331 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.331 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.331 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.331 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.331 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.331 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:20.331 { 00:11:20.331 "cntlid": 47, 00:11:20.331 "qid": 0, 00:11:20.331 "state": "enabled", 00:11:20.331 "thread": "nvmf_tgt_poll_group_000", 00:11:20.331 "listen_address": { 00:11:20.331 "trtype": "TCP", 00:11:20.331 "adrfam": "IPv4", 00:11:20.331 "traddr": "10.0.0.2", 00:11:20.332 "trsvcid": "4420" 00:11:20.332 }, 00:11:20.332 "peer_address": { 00:11:20.332 "trtype": "TCP", 00:11:20.332 "adrfam": "IPv4", 00:11:20.332 "traddr": "10.0.0.1", 00:11:20.332 "trsvcid": "60650" 00:11:20.332 }, 00:11:20.332 "auth": { 00:11:20.332 "state": "completed", 00:11:20.332 "digest": "sha256", 00:11:20.332 "dhgroup": "ffdhe8192" 00:11:20.332 } 00:11:20.332 } 00:11:20.332 ]' 00:11:20.332 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:20.332 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:20.332 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:20.661 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:20.661 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:20.661 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:11:20.661 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.661 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:20.918 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:03:NzMyNjI4OGYwMWQ3YmYzMmU2N2RkZGZhYTdhMmJmY2U3NDU4YmJhNTYzYTFkOTRjZGY5MTBkYjA1NjIyNjE4OCpnTeA=: 00:11:21.485 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:21.485 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:21.485 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.485 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.485 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.485 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:21.485 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:21.485 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:21.485 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:21.485 19:49:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:21.745 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:11:21.745 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:21.745 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:21.745 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:21.745 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:21.745 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.745 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.745 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.745 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.745 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
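The for-loops that just appeared in the xtrace (target/auth.sh lines 91 through 96) are what drive every block in this log: one pass per digest, per DH group, and per key index. A condensed sketch of that driver, reconstructed from the trace rather than copied from the script (hostrpc and connect_authenticate are helpers defined earlier in target/auth.sh, and the digests, dhgroups and keys arrays are populated earlier in the run):

  # assumed shape of the loop behind the repeated iterations in this log
  for digest in "${digests[@]}"; do              # sha256, sha384, ...
      for dhgroup in "${dhgroups[@]}"; do        # null, ffdhe2048, ..., ffdhe8192
          for keyid in "${!keys[@]}"; do         # key indexes 0..3
              # limit the SPDK host to exactly one digest/dhgroup pair
              hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
              # authorize, attach, verify the qpair, then tear everything down again
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done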
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.745 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.745 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:22.003 00:11:22.003 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:22.003 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:22.003 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.262 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.262 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.262 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.262 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.262 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.262 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:22.262 { 00:11:22.262 "cntlid": 49, 00:11:22.262 "qid": 0, 00:11:22.262 "state": "enabled", 00:11:22.262 "thread": "nvmf_tgt_poll_group_000", 00:11:22.262 "listen_address": { 00:11:22.262 "trtype": "TCP", 00:11:22.262 "adrfam": "IPv4", 00:11:22.262 "traddr": "10.0.0.2", 00:11:22.262 "trsvcid": "4420" 00:11:22.262 }, 00:11:22.262 "peer_address": { 00:11:22.262 "trtype": "TCP", 00:11:22.262 "adrfam": "IPv4", 00:11:22.262 "traddr": "10.0.0.1", 00:11:22.262 "trsvcid": "56542" 00:11:22.262 }, 00:11:22.262 "auth": { 00:11:22.262 "state": "completed", 00:11:22.262 "digest": "sha384", 00:11:22.262 "dhgroup": "null" 00:11:22.262 } 00:11:22.262 } 00:11:22.262 ]' 00:11:22.262 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:22.262 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:22.262 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:22.262 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:22.262 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:22.519 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.519 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.519 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.777 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:00:ZTdhMThiZWVjZTc4MDFhNDFjNjdhNGVmZmUxYjg2YmQwZWE0ZjEzNjhhMzU0YmRicEg4lA==: --dhchap-ctrl-secret DHHC-1:03:YWJkNTUxZjFjNzMwZWE0ZmM5MDdhM2IxMjBmZDFhMjdlM2EwNmUwMDlmZjFhNzBhNjgwNDRiYjhkNzkzYmZlOJBcYZg=: 00:11:23.342 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.342 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:23.342 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.342 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.342 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.342 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:23.342 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:23.342 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:23.601 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:11:23.601 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:23.601 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:23.601 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:23.601 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:23.601 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.601 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.601 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.601 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.601 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.601 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.601 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.910 00:11:23.910 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:23.910 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.910 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:24.168 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.168 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.168 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.168 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.168 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.168 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:24.168 { 00:11:24.168 "cntlid": 51, 00:11:24.168 "qid": 0, 00:11:24.168 "state": "enabled", 00:11:24.168 "thread": "nvmf_tgt_poll_group_000", 00:11:24.168 "listen_address": { 00:11:24.168 "trtype": "TCP", 00:11:24.168 "adrfam": "IPv4", 00:11:24.168 "traddr": "10.0.0.2", 00:11:24.168 "trsvcid": "4420" 00:11:24.168 }, 00:11:24.168 "peer_address": { 00:11:24.168 "trtype": "TCP", 00:11:24.168 "adrfam": "IPv4", 00:11:24.168 "traddr": "10.0.0.1", 00:11:24.168 "trsvcid": "56562" 00:11:24.168 }, 00:11:24.168 "auth": { 00:11:24.168 "state": "completed", 00:11:24.168 "digest": "sha384", 00:11:24.168 "dhgroup": "null" 00:11:24.168 } 00:11:24.168 } 00:11:24.168 ]' 00:11:24.168 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:24.427 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:24.427 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:24.427 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:24.427 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:24.427 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.427 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.427 19:49:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.686 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:01:YmY4OTA5ZDY2ZjMyZmEwZDBmZDE3OWE0Yzg3MGIyZmZ/3JeW: --dhchap-ctrl-secret 
DHHC-1:02:NjMzZThkZDQxZTdiZWZiMTdkYmRmNDA3NTAzODQxMDFiOTRhNWE0NjYyNDMyOGJm5DeHHg==: 00:11:25.250 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.250 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:25.250 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.250 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.250 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.250 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:25.250 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:25.250 19:49:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:25.508 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:11:25.508 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:25.508 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:25.508 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:25.508 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:25.508 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.508 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.508 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.508 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.508 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.509 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.509 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.767 00:11:26.026 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:26.026 19:49:54 
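The attach that was just issued is the SPDK host side of the cycle, driven through a second RPC server on /var/tmp/host.sock (the hostrpc helper from target/auth.sh line 31 in the trace). A rough sketch of that half for the sha384/null/key2 pass shown here, with the address, NQNs and key names taken from this log:

  hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de

  # restrict negotiation to the digest/dhgroup pair under test
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null

  # attach to the target; the controller only comes up if DH-CHAP succeeds
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # the trace then expects a controller named nvme0 before inspecting the qpair
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'     # nvme0

  # detach before the next combination
  hostrpc bdev_nvme_detach_controller nvme0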
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.026 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:26.026 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.026 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.026 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.026 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.284 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.284 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:26.284 { 00:11:26.284 "cntlid": 53, 00:11:26.284 "qid": 0, 00:11:26.284 "state": "enabled", 00:11:26.284 "thread": "nvmf_tgt_poll_group_000", 00:11:26.284 "listen_address": { 00:11:26.284 "trtype": "TCP", 00:11:26.284 "adrfam": "IPv4", 00:11:26.284 "traddr": "10.0.0.2", 00:11:26.284 "trsvcid": "4420" 00:11:26.284 }, 00:11:26.284 "peer_address": { 00:11:26.284 "trtype": "TCP", 00:11:26.284 "adrfam": "IPv4", 00:11:26.284 "traddr": "10.0.0.1", 00:11:26.284 "trsvcid": "56572" 00:11:26.284 }, 00:11:26.284 "auth": { 00:11:26.284 "state": "completed", 00:11:26.284 "digest": "sha384", 00:11:26.284 "dhgroup": "null" 00:11:26.284 } 00:11:26.284 } 00:11:26.284 ]' 00:11:26.284 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:26.284 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:26.284 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:26.284 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:26.284 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:26.284 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.284 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.284 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.543 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:02:YWU5Zjc3YTdjNjg0YmQ4YWU5YjY5NTkzODAwYzYwZTQ1ZWEyMTk5NjM4MGIyMjM3pJeUjw==: --dhchap-ctrl-secret DHHC-1:01:OTU3YmU0MGNlNjNmMzQxNmViYmY0OWU1ODkxOWIyZGYhPufI: 00:11:27.110 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.110 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:27.110 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.110 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.369 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.369 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:27.369 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:27.369 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:27.632 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:11:27.632 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:27.632 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:27.632 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:27.632 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:27.632 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.632 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key3 00:11:27.632 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.632 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.632 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.632 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:27.632 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:27.891 00:11:27.891 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:27.891 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:27.891 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.150 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.150 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:11:28.150 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.150 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.150 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.150 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:28.150 { 00:11:28.150 "cntlid": 55, 00:11:28.150 "qid": 0, 00:11:28.150 "state": "enabled", 00:11:28.150 "thread": "nvmf_tgt_poll_group_000", 00:11:28.150 "listen_address": { 00:11:28.150 "trtype": "TCP", 00:11:28.150 "adrfam": "IPv4", 00:11:28.150 "traddr": "10.0.0.2", 00:11:28.150 "trsvcid": "4420" 00:11:28.150 }, 00:11:28.150 "peer_address": { 00:11:28.150 "trtype": "TCP", 00:11:28.150 "adrfam": "IPv4", 00:11:28.150 "traddr": "10.0.0.1", 00:11:28.150 "trsvcid": "56590" 00:11:28.150 }, 00:11:28.150 "auth": { 00:11:28.150 "state": "completed", 00:11:28.150 "digest": "sha384", 00:11:28.150 "dhgroup": "null" 00:11:28.150 } 00:11:28.150 } 00:11:28.150 ]' 00:11:28.150 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:28.150 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:28.150 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:28.150 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:28.150 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:28.150 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.150 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.150 19:49:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.717 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:03:NzMyNjI4OGYwMWQ3YmYzMmU2N2RkZGZhYTdhMmJmY2U3NDU4YmJhNTYzYTFkOTRjZGY5MTBkYjA1NjIyNjE4OCpnTeA=: 00:11:29.284 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.285 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:29.285 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.285 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.285 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.285 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:29.285 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:29.285 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:29.285 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:29.543 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:11:29.543 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:29.543 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:29.543 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:29.543 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:29.543 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.543 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.543 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.543 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.543 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.543 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.543 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.110 00:11:30.111 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:30.111 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:30.111 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.111 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:30.111 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:30.111 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.111 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.371 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.371 19:49:58 
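What actually decides pass or fail in each iteration is the auth block the target reports for the new qpair, as in the JSON dumps above and in the one that follows for this sha384/ffdhe2048 pass. A small sketch of that verification, using the same jq filters as the trace (paths and NQN from this log; the expected values are whatever digest and DH group the iteration configured):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # dump the subsystem's qpairs on the target and inspect the negotiated parameters
  qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")

  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]   # digest requested this round
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]   # DH group requested this round
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # authentication finished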
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:30.371 { 00:11:30.371 "cntlid": 57, 00:11:30.371 "qid": 0, 00:11:30.371 "state": "enabled", 00:11:30.371 "thread": "nvmf_tgt_poll_group_000", 00:11:30.371 "listen_address": { 00:11:30.371 "trtype": "TCP", 00:11:30.371 "adrfam": "IPv4", 00:11:30.371 "traddr": "10.0.0.2", 00:11:30.371 "trsvcid": "4420" 00:11:30.371 }, 00:11:30.371 "peer_address": { 00:11:30.371 "trtype": "TCP", 00:11:30.371 "adrfam": "IPv4", 00:11:30.371 "traddr": "10.0.0.1", 00:11:30.371 "trsvcid": "45994" 00:11:30.371 }, 00:11:30.371 "auth": { 00:11:30.371 "state": "completed", 00:11:30.371 "digest": "sha384", 00:11:30.371 "dhgroup": "ffdhe2048" 00:11:30.371 } 00:11:30.371 } 00:11:30.371 ]' 00:11:30.372 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:30.372 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:30.372 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:30.372 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:30.372 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:30.372 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.372 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.372 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.631 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:00:ZTdhMThiZWVjZTc4MDFhNDFjNjdhNGVmZmUxYjg2YmQwZWE0ZjEzNjhhMzU0YmRicEg4lA==: --dhchap-ctrl-secret DHHC-1:03:YWJkNTUxZjFjNzMwZWE0ZmM5MDdhM2IxMjBmZDFhMjdlM2EwNmUwMDlmZjFhNzBhNjgwNDRiYjhkNzkzYmZlOJBcYZg=: 00:11:31.566 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.566 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:31.566 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.566 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.566 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.566 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:31.566 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:31.566 19:49:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:31.566 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:11:31.566 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:31.566 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:31.566 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:31.566 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:31.566 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.566 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.566 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.566 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.566 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.566 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.566 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.134 00:11:32.134 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:32.134 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:32.134 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.392 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.392 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.392 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.392 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.392 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.392 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:32.392 { 00:11:32.392 "cntlid": 59, 00:11:32.392 "qid": 0, 00:11:32.392 "state": "enabled", 00:11:32.392 "thread": "nvmf_tgt_poll_group_000", 00:11:32.392 "listen_address": { 00:11:32.392 "trtype": "TCP", 00:11:32.392 "adrfam": "IPv4", 00:11:32.392 "traddr": "10.0.0.2", 00:11:32.392 "trsvcid": "4420" 
00:11:32.392 }, 00:11:32.392 "peer_address": { 00:11:32.392 "trtype": "TCP", 00:11:32.392 "adrfam": "IPv4", 00:11:32.392 "traddr": "10.0.0.1", 00:11:32.392 "trsvcid": "46008" 00:11:32.392 }, 00:11:32.392 "auth": { 00:11:32.392 "state": "completed", 00:11:32.392 "digest": "sha384", 00:11:32.392 "dhgroup": "ffdhe2048" 00:11:32.392 } 00:11:32.392 } 00:11:32.392 ]' 00:11:32.392 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:32.392 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:32.392 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:32.392 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:32.392 19:50:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:32.392 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.392 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.392 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.958 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:01:YmY4OTA5ZDY2ZjMyZmEwZDBmZDE3OWE0Yzg3MGIyZmZ/3JeW: --dhchap-ctrl-secret DHHC-1:02:NjMzZThkZDQxZTdiZWZiMTdkYmRmNDA3NTAzODQxMDFiOTRhNWE0NjYyNDMyOGJm5DeHHg==: 00:11:33.525 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.525 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:33.525 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.525 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.525 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.525 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:33.525 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:33.525 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:33.842 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:11:33.842 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:33.842 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
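For reference, the trace above repeats one pass of the auth.sh key loop: the host initiator is limited to a single digest/DH group, the key under test is installed on the subsystem, and a controller is attached so that DH-CHAP actually negotiates. A minimal bash sketch of that pass, using only commands that appear in this log (paths, RPC socket, NQNs and host UUID are copied from the trace; the loop variables digest, dhgroup and keyid are stand-ins, and running the target-side RPC on the default socket is an assumption), would be:

  # One connect_authenticate pass, reconstructed from the trace (sketch only).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Host side: restrict the initiator to the digest/dhgroup under test.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Target side: allow this host on the subsystem with the key under test.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # Host side: attach a controller, which forces DH-CHAP to run end to end.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"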
00:11:33.842 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:33.842 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:33.842 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.843 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.843 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.843 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.843 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.843 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:33.843 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.112 00:11:34.112 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:34.112 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:34.112 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.371 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.371 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.371 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.371 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.371 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.371 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:34.371 { 00:11:34.371 "cntlid": 61, 00:11:34.371 "qid": 0, 00:11:34.371 "state": "enabled", 00:11:34.371 "thread": "nvmf_tgt_poll_group_000", 00:11:34.371 "listen_address": { 00:11:34.371 "trtype": "TCP", 00:11:34.371 "adrfam": "IPv4", 00:11:34.371 "traddr": "10.0.0.2", 00:11:34.371 "trsvcid": "4420" 00:11:34.371 }, 00:11:34.371 "peer_address": { 00:11:34.371 "trtype": "TCP", 00:11:34.371 "adrfam": "IPv4", 00:11:34.371 "traddr": "10.0.0.1", 00:11:34.371 "trsvcid": "46044" 00:11:34.371 }, 00:11:34.371 "auth": { 00:11:34.371 "state": "completed", 00:11:34.371 "digest": "sha384", 00:11:34.371 "dhgroup": "ffdhe2048" 00:11:34.371 } 00:11:34.371 } 00:11:34.371 ]' 00:11:34.371 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:34.630 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:34.630 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:34.630 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:34.630 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:34.630 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.630 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.630 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.894 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:02:YWU5Zjc3YTdjNjg0YmQ4YWU5YjY5NTkzODAwYzYwZTQ1ZWEyMTk5NjM4MGIyMjM3pJeUjw==: --dhchap-ctrl-secret DHHC-1:01:OTU3YmU0MGNlNjNmMzQxNmViYmY0OWU1ODkxOWIyZGYhPufI: 00:11:35.463 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.463 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:35.463 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.463 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.463 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.463 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:35.463 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:35.463 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:35.722 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:11:35.722 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:35.722 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:35.722 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:35.722 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:35.722 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.722 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key3 00:11:35.722 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.722 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.722 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.722 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:35.722 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:36.288 00:11:36.288 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:36.288 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.288 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:36.546 19:50:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.546 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.546 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.546 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.546 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.546 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:36.546 { 00:11:36.546 "cntlid": 63, 00:11:36.546 "qid": 0, 00:11:36.546 "state": "enabled", 00:11:36.546 "thread": "nvmf_tgt_poll_group_000", 00:11:36.546 "listen_address": { 00:11:36.546 "trtype": "TCP", 00:11:36.546 "adrfam": "IPv4", 00:11:36.546 "traddr": "10.0.0.2", 00:11:36.546 "trsvcid": "4420" 00:11:36.546 }, 00:11:36.546 "peer_address": { 00:11:36.546 "trtype": "TCP", 00:11:36.546 "adrfam": "IPv4", 00:11:36.546 "traddr": "10.0.0.1", 00:11:36.546 "trsvcid": "46070" 00:11:36.546 }, 00:11:36.546 "auth": { 00:11:36.546 "state": "completed", 00:11:36.546 "digest": "sha384", 00:11:36.546 "dhgroup": "ffdhe2048" 00:11:36.546 } 00:11:36.546 } 00:11:36.546 ]' 00:11:36.546 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:36.546 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:36.546 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:36.546 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:36.546 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:11:36.546 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.546 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.546 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.805 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:03:NzMyNjI4OGYwMWQ3YmYzMmU2N2RkZGZhYTdhMmJmY2U3NDU4YmJhNTYzYTFkOTRjZGY5MTBkYjA1NjIyNjE4OCpnTeA=: 00:11:37.745 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.745 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:37.745 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.745 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.745 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.745 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:37.745 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:37.745 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:37.745 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:37.745 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:11:37.745 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:37.745 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:37.745 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:37.745 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:37.745 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.746 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.746 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.746 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.746 19:50:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.746 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.746 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.311 00:11:38.311 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:38.311 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:38.311 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.570 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.570 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.570 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.570 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.570 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.570 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:38.570 { 00:11:38.570 "cntlid": 65, 00:11:38.570 "qid": 0, 00:11:38.570 "state": "enabled", 00:11:38.570 "thread": "nvmf_tgt_poll_group_000", 00:11:38.570 "listen_address": { 00:11:38.570 "trtype": "TCP", 00:11:38.570 "adrfam": "IPv4", 00:11:38.570 "traddr": "10.0.0.2", 00:11:38.570 "trsvcid": "4420" 00:11:38.570 }, 00:11:38.570 "peer_address": { 00:11:38.570 "trtype": "TCP", 00:11:38.570 "adrfam": "IPv4", 00:11:38.570 "traddr": "10.0.0.1", 00:11:38.570 "trsvcid": "46106" 00:11:38.570 }, 00:11:38.570 "auth": { 00:11:38.570 "state": "completed", 00:11:38.570 "digest": "sha384", 00:11:38.570 "dhgroup": "ffdhe3072" 00:11:38.570 } 00:11:38.570 } 00:11:38.570 ]' 00:11:38.570 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:38.570 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:38.570 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:38.570 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:38.570 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:38.570 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.570 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.570 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.828 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:00:ZTdhMThiZWVjZTc4MDFhNDFjNjdhNGVmZmUxYjg2YmQwZWE0ZjEzNjhhMzU0YmRicEg4lA==: --dhchap-ctrl-secret DHHC-1:03:YWJkNTUxZjFjNzMwZWE0ZmM5MDdhM2IxMjBmZDFhMjdlM2EwNmUwMDlmZjFhNzBhNjgwNDRiYjhkNzkzYmZlOJBcYZg=: 00:11:39.762 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.762 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:39.762 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.762 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.762 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.762 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:39.762 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:39.762 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:39.762 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:11:39.762 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:39.762 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:39.762 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:39.762 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:39.762 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.762 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.763 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.763 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.763 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.763 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:11:39.763 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.330 00:11:40.330 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:40.330 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:40.330 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.330 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.330 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.330 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.330 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.591 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.591 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:40.591 { 00:11:40.591 "cntlid": 67, 00:11:40.591 "qid": 0, 00:11:40.591 "state": "enabled", 00:11:40.591 "thread": "nvmf_tgt_poll_group_000", 00:11:40.591 "listen_address": { 00:11:40.591 "trtype": "TCP", 00:11:40.591 "adrfam": "IPv4", 00:11:40.591 "traddr": "10.0.0.2", 00:11:40.591 "trsvcid": "4420" 00:11:40.591 }, 00:11:40.591 "peer_address": { 00:11:40.591 "trtype": "TCP", 00:11:40.591 "adrfam": "IPv4", 00:11:40.591 "traddr": "10.0.0.1", 00:11:40.591 "trsvcid": "39544" 00:11:40.591 }, 00:11:40.591 "auth": { 00:11:40.591 "state": "completed", 00:11:40.591 "digest": "sha384", 00:11:40.591 "dhgroup": "ffdhe3072" 00:11:40.591 } 00:11:40.591 } 00:11:40.591 ]' 00:11:40.591 19:50:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:40.591 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:40.591 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:40.591 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:40.591 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:40.591 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.591 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.591 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.849 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 
69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:01:YmY4OTA5ZDY2ZjMyZmEwZDBmZDE3OWE0Yzg3MGIyZmZ/3JeW: --dhchap-ctrl-secret DHHC-1:02:NjMzZThkZDQxZTdiZWZiMTdkYmRmNDA3NTAzODQxMDFiOTRhNWE0NjYyNDMyOGJm5DeHHg==: 00:11:41.785 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.785 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:41.785 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.785 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.785 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.785 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:41.785 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:41.785 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:41.785 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:11:41.785 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:41.785 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:41.785 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:41.785 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:41.785 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.785 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.785 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.785 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.785 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.785 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.785 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:11:42.351 00:11:42.351 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:42.351 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.351 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:42.351 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.351 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.351 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.351 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.351 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.609 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:42.609 { 00:11:42.609 "cntlid": 69, 00:11:42.609 "qid": 0, 00:11:42.609 "state": "enabled", 00:11:42.609 "thread": "nvmf_tgt_poll_group_000", 00:11:42.609 "listen_address": { 00:11:42.609 "trtype": "TCP", 00:11:42.609 "adrfam": "IPv4", 00:11:42.609 "traddr": "10.0.0.2", 00:11:42.609 "trsvcid": "4420" 00:11:42.609 }, 00:11:42.609 "peer_address": { 00:11:42.609 "trtype": "TCP", 00:11:42.609 "adrfam": "IPv4", 00:11:42.609 "traddr": "10.0.0.1", 00:11:42.609 "trsvcid": "39566" 00:11:42.609 }, 00:11:42.609 "auth": { 00:11:42.609 "state": "completed", 00:11:42.609 "digest": "sha384", 00:11:42.609 "dhgroup": "ffdhe3072" 00:11:42.609 } 00:11:42.609 } 00:11:42.609 ]' 00:11:42.609 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:42.609 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:42.609 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:42.609 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:42.609 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:42.609 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.609 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.609 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.867 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:02:YWU5Zjc3YTdjNjg0YmQ4YWU5YjY5NTkzODAwYzYwZTQ1ZWEyMTk5NjM4MGIyMjM3pJeUjw==: --dhchap-ctrl-secret DHHC-1:01:OTU3YmU0MGNlNjNmMzQxNmViYmY0OWU1ODkxOWIyZGYhPufI: 00:11:43.803 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
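After each attach, the trace checks the authenticated qpair before tearing the controller down again; the jq filters and the expected values are the ones shown above. A self-contained sketch of that check (the shell variables and the exit-on-mismatch handling are assumptions, not part of auth.sh itself):

  # Verify the qpair negotiated the expected digest/dhgroup and completed auth.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  digest=sha384 dhgroup=ffdhe3072   # expected values for the pass shown above

  [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]] || exit 1

  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]] || exit 1
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]] || exit 1
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]] || exit 1

  # Detach the host-side controller before the nvme-cli connect check.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0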
00:11:43.803 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:43.803 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.803 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.803 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.803 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:43.803 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:43.803 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:43.803 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:11:43.803 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:43.803 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:43.803 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:43.803 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:43.803 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.803 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key3 00:11:43.803 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.803 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.803 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.803 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:43.803 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:44.371 00:11:44.371 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:44.371 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.371 19:50:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:44.630 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.630 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.630 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.630 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.630 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.630 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:44.630 { 00:11:44.630 "cntlid": 71, 00:11:44.630 "qid": 0, 00:11:44.630 "state": "enabled", 00:11:44.630 "thread": "nvmf_tgt_poll_group_000", 00:11:44.630 "listen_address": { 00:11:44.630 "trtype": "TCP", 00:11:44.630 "adrfam": "IPv4", 00:11:44.630 "traddr": "10.0.0.2", 00:11:44.630 "trsvcid": "4420" 00:11:44.630 }, 00:11:44.630 "peer_address": { 00:11:44.630 "trtype": "TCP", 00:11:44.630 "adrfam": "IPv4", 00:11:44.630 "traddr": "10.0.0.1", 00:11:44.630 "trsvcid": "39596" 00:11:44.630 }, 00:11:44.630 "auth": { 00:11:44.630 "state": "completed", 00:11:44.630 "digest": "sha384", 00:11:44.630 "dhgroup": "ffdhe3072" 00:11:44.630 } 00:11:44.630 } 00:11:44.630 ]' 00:11:44.630 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:44.630 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:44.630 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:44.630 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:44.630 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:44.888 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.888 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.888 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.888 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:03:NzMyNjI4OGYwMWQ3YmYzMmU2N2RkZGZhYTdhMmJmY2U3NDU4YmJhNTYzYTFkOTRjZGY5MTBkYjA1NjIyNjE4OCpnTeA=: 00:11:45.822 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.822 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:45.822 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.822 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.822 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:11:45.822 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:45.822 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:45.822 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:45.822 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:45.822 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:11:45.822 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:45.822 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:45.822 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:45.822 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:45.822 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.822 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.822 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.822 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.822 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.822 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.822 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.395 00:11:46.395 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:46.395 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:46.395 19:50:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.395 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.395 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.395 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.395 19:50:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.395 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.395 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:46.395 { 00:11:46.395 "cntlid": 73, 00:11:46.395 "qid": 0, 00:11:46.395 "state": "enabled", 00:11:46.395 "thread": "nvmf_tgt_poll_group_000", 00:11:46.395 "listen_address": { 00:11:46.395 "trtype": "TCP", 00:11:46.395 "adrfam": "IPv4", 00:11:46.395 "traddr": "10.0.0.2", 00:11:46.395 "trsvcid": "4420" 00:11:46.395 }, 00:11:46.395 "peer_address": { 00:11:46.395 "trtype": "TCP", 00:11:46.395 "adrfam": "IPv4", 00:11:46.395 "traddr": "10.0.0.1", 00:11:46.395 "trsvcid": "39622" 00:11:46.395 }, 00:11:46.395 "auth": { 00:11:46.395 "state": "completed", 00:11:46.395 "digest": "sha384", 00:11:46.395 "dhgroup": "ffdhe4096" 00:11:46.395 } 00:11:46.395 } 00:11:46.395 ]' 00:11:46.395 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:46.653 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:46.653 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:46.653 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:46.653 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:46.653 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.654 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.654 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.912 19:50:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:00:ZTdhMThiZWVjZTc4MDFhNDFjNjdhNGVmZmUxYjg2YmQwZWE0ZjEzNjhhMzU0YmRicEg4lA==: --dhchap-ctrl-secret DHHC-1:03:YWJkNTUxZjFjNzMwZWE0ZmM5MDdhM2IxMjBmZDFhMjdlM2EwNmUwMDlmZjFhNzBhNjgwNDRiYjhkNzkzYmZlOJBcYZg=: 00:11:47.479 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.479 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:47.479 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.479 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.479 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.479 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:47.479 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:47.479 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:47.737 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:11:47.737 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:47.737 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:47.737 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:47.737 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:47.737 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.737 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.737 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.737 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.737 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.737 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.737 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.303 00:11:48.303 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:48.303 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:48.303 19:50:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.559 19:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.559 19:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.559 19:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.559 19:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.559 19:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.559 19:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:48.559 { 00:11:48.559 "cntlid": 75, 00:11:48.559 "qid": 0, 00:11:48.559 
"state": "enabled", 00:11:48.559 "thread": "nvmf_tgt_poll_group_000", 00:11:48.559 "listen_address": { 00:11:48.559 "trtype": "TCP", 00:11:48.559 "adrfam": "IPv4", 00:11:48.559 "traddr": "10.0.0.2", 00:11:48.559 "trsvcid": "4420" 00:11:48.559 }, 00:11:48.559 "peer_address": { 00:11:48.559 "trtype": "TCP", 00:11:48.559 "adrfam": "IPv4", 00:11:48.559 "traddr": "10.0.0.1", 00:11:48.559 "trsvcid": "39648" 00:11:48.559 }, 00:11:48.559 "auth": { 00:11:48.559 "state": "completed", 00:11:48.559 "digest": "sha384", 00:11:48.559 "dhgroup": "ffdhe4096" 00:11:48.559 } 00:11:48.559 } 00:11:48.559 ]' 00:11:48.559 19:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:48.559 19:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:48.559 19:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:48.559 19:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:48.559 19:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:48.559 19:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.559 19:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.559 19:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.817 19:50:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:01:YmY4OTA5ZDY2ZjMyZmEwZDBmZDE3OWE0Yzg3MGIyZmZ/3JeW: --dhchap-ctrl-secret DHHC-1:02:NjMzZThkZDQxZTdiZWZiMTdkYmRmNDA3NTAzODQxMDFiOTRhNWE0NjYyNDMyOGJm5DeHHg==: 00:11:49.750 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.750 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:49.750 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.750 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.750 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.750 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:49.750 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:49.750 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:50.009 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 
00:11:50.009 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:50.009 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:50.009 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:50.009 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:50.009 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.009 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.009 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.009 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.009 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.009 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.009 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.267 00:11:50.267 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:50.267 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:50.267 19:50:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.525 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.525 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.525 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.525 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.525 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.525 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:50.525 { 00:11:50.525 "cntlid": 77, 00:11:50.525 "qid": 0, 00:11:50.525 "state": "enabled", 00:11:50.525 "thread": "nvmf_tgt_poll_group_000", 00:11:50.525 "listen_address": { 00:11:50.525 "trtype": "TCP", 00:11:50.525 "adrfam": "IPv4", 00:11:50.525 "traddr": "10.0.0.2", 00:11:50.525 "trsvcid": "4420" 00:11:50.525 }, 00:11:50.525 "peer_address": { 00:11:50.525 "trtype": "TCP", 00:11:50.525 "adrfam": "IPv4", 00:11:50.525 "traddr": "10.0.0.1", 00:11:50.525 "trsvcid": "51240" 00:11:50.525 }, 00:11:50.525 
"auth": { 00:11:50.525 "state": "completed", 00:11:50.525 "digest": "sha384", 00:11:50.525 "dhgroup": "ffdhe4096" 00:11:50.525 } 00:11:50.525 } 00:11:50.525 ]' 00:11:50.525 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:50.783 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:50.783 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:50.783 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:50.784 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:50.784 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.784 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.784 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.042 19:50:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:02:YWU5Zjc3YTdjNjg0YmQ4YWU5YjY5NTkzODAwYzYwZTQ1ZWEyMTk5NjM4MGIyMjM3pJeUjw==: --dhchap-ctrl-secret DHHC-1:01:OTU3YmU0MGNlNjNmMzQxNmViYmY0OWU1ODkxOWIyZGYhPufI: 00:11:51.977 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.977 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:51.977 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.977 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.977 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.977 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:51.977 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:51.977 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:52.236 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:11:52.236 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:52.236 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:52.236 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:52.236 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key3 00:11:52.236 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.236 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key3 00:11:52.236 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.236 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.236 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.236 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:52.236 19:50:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:52.494 00:11:52.494 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:52.494 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:52.494 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.752 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.752 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.752 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.752 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.752 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.752 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:52.752 { 00:11:52.752 "cntlid": 79, 00:11:52.752 "qid": 0, 00:11:52.752 "state": "enabled", 00:11:52.752 "thread": "nvmf_tgt_poll_group_000", 00:11:52.752 "listen_address": { 00:11:52.752 "trtype": "TCP", 00:11:52.752 "adrfam": "IPv4", 00:11:52.752 "traddr": "10.0.0.2", 00:11:52.752 "trsvcid": "4420" 00:11:52.752 }, 00:11:52.752 "peer_address": { 00:11:52.752 "trtype": "TCP", 00:11:52.752 "adrfam": "IPv4", 00:11:52.752 "traddr": "10.0.0.1", 00:11:52.752 "trsvcid": "51276" 00:11:52.752 }, 00:11:52.752 "auth": { 00:11:52.752 "state": "completed", 00:11:52.752 "digest": "sha384", 00:11:52.752 "dhgroup": "ffdhe4096" 00:11:52.752 } 00:11:52.752 } 00:11:52.752 ]' 00:11:52.753 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:53.011 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:53.011 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 
00:11:53.011 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:53.011 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:53.011 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.011 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.011 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.269 19:50:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:03:NzMyNjI4OGYwMWQ3YmYzMmU2N2RkZGZhYTdhMmJmY2U3NDU4YmJhNTYzYTFkOTRjZGY5MTBkYjA1NjIyNjE4OCpnTeA=: 00:11:54.203 19:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.203 19:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:54.203 19:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.203 19:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.203 19:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.203 19:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:54.203 19:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:54.203 19:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:54.203 19:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:54.462 19:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:11:54.462 19:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:54.462 19:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:54.462 19:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:54.462 19:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:54.462 19:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.462 19:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.462 19:50:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.462 19:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.462 19:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.462 19:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.462 19:50:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.721 00:11:54.721 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:54.721 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.721 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:54.979 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.979 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.979 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.979 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.979 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.979 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:54.979 { 00:11:54.979 "cntlid": 81, 00:11:54.979 "qid": 0, 00:11:54.979 "state": "enabled", 00:11:54.979 "thread": "nvmf_tgt_poll_group_000", 00:11:54.979 "listen_address": { 00:11:54.979 "trtype": "TCP", 00:11:54.979 "adrfam": "IPv4", 00:11:54.979 "traddr": "10.0.0.2", 00:11:54.979 "trsvcid": "4420" 00:11:54.979 }, 00:11:54.979 "peer_address": { 00:11:54.979 "trtype": "TCP", 00:11:54.979 "adrfam": "IPv4", 00:11:54.979 "traddr": "10.0.0.1", 00:11:54.979 "trsvcid": "51302" 00:11:54.979 }, 00:11:54.979 "auth": { 00:11:54.979 "state": "completed", 00:11:54.979 "digest": "sha384", 00:11:54.979 "dhgroup": "ffdhe6144" 00:11:54.979 } 00:11:54.979 } 00:11:54.979 ]' 00:11:54.979 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.237 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:55.237 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.237 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:55.237 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:55.237 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:11:55.237 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.237 19:50:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.496 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:00:ZTdhMThiZWVjZTc4MDFhNDFjNjdhNGVmZmUxYjg2YmQwZWE0ZjEzNjhhMzU0YmRicEg4lA==: --dhchap-ctrl-secret DHHC-1:03:YWJkNTUxZjFjNzMwZWE0ZmM5MDdhM2IxMjBmZDFhMjdlM2EwNmUwMDlmZjFhNzBhNjgwNDRiYjhkNzkzYmZlOJBcYZg=: 00:11:56.101 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.101 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:56.101 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.101 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.101 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.101 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:56.101 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:56.101 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:56.392 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:11:56.392 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:56.392 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:56.392 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:56.392 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:56.392 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.392 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.392 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.392 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.392 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.392 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.392 19:50:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.958 00:11:56.958 19:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:56.958 19:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.958 19:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:57.216 19:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.216 19:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.216 19:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.216 19:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.216 19:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.217 19:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:57.217 { 00:11:57.217 "cntlid": 83, 00:11:57.217 "qid": 0, 00:11:57.217 "state": "enabled", 00:11:57.217 "thread": "nvmf_tgt_poll_group_000", 00:11:57.217 "listen_address": { 00:11:57.217 "trtype": "TCP", 00:11:57.217 "adrfam": "IPv4", 00:11:57.217 "traddr": "10.0.0.2", 00:11:57.217 "trsvcid": "4420" 00:11:57.217 }, 00:11:57.217 "peer_address": { 00:11:57.217 "trtype": "TCP", 00:11:57.217 "adrfam": "IPv4", 00:11:57.217 "traddr": "10.0.0.1", 00:11:57.217 "trsvcid": "51320" 00:11:57.217 }, 00:11:57.217 "auth": { 00:11:57.217 "state": "completed", 00:11:57.217 "digest": "sha384", 00:11:57.217 "dhgroup": "ffdhe6144" 00:11:57.217 } 00:11:57.217 } 00:11:57.217 ]' 00:11:57.217 19:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:57.217 19:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:57.217 19:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:57.217 19:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:57.217 19:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:57.475 19:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.475 19:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.475 19:50:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.733 19:50:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:01:YmY4OTA5ZDY2ZjMyZmEwZDBmZDE3OWE0Yzg3MGIyZmZ/3JeW: --dhchap-ctrl-secret DHHC-1:02:NjMzZThkZDQxZTdiZWZiMTdkYmRmNDA3NTAzODQxMDFiOTRhNWE0NjYyNDMyOGJm5DeHHg==: 00:11:58.300 19:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.300 19:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:11:58.300 19:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.300 19:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.300 19:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.300 19:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:58.301 19:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:58.301 19:50:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:58.559 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:11:58.559 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:58.559 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:58.559 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:58.559 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:58.559 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.559 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:58.560 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.560 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.560 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.560 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:58.560 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.126 00:11:59.126 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:59.126 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.126 19:50:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:59.384 19:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.384 19:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.385 19:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.385 19:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.385 19:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.385 19:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:59.385 { 00:11:59.385 "cntlid": 85, 00:11:59.385 "qid": 0, 00:11:59.385 "state": "enabled", 00:11:59.385 "thread": "nvmf_tgt_poll_group_000", 00:11:59.385 "listen_address": { 00:11:59.385 "trtype": "TCP", 00:11:59.385 "adrfam": "IPv4", 00:11:59.385 "traddr": "10.0.0.2", 00:11:59.385 "trsvcid": "4420" 00:11:59.385 }, 00:11:59.385 "peer_address": { 00:11:59.385 "trtype": "TCP", 00:11:59.385 "adrfam": "IPv4", 00:11:59.385 "traddr": "10.0.0.1", 00:11:59.385 "trsvcid": "51344" 00:11:59.385 }, 00:11:59.385 "auth": { 00:11:59.385 "state": "completed", 00:11:59.385 "digest": "sha384", 00:11:59.385 "dhgroup": "ffdhe6144" 00:11:59.385 } 00:11:59.385 } 00:11:59.385 ]' 00:11:59.385 19:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:59.647 19:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:59.647 19:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:59.647 19:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:59.647 19:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:59.647 19:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.647 19:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.647 19:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.911 19:50:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:02:YWU5Zjc3YTdjNjg0YmQ4YWU5YjY5NTkzODAwYzYwZTQ1ZWEyMTk5NjM4MGIyMjM3pJeUjw==: --dhchap-ctrl-secret 
DHHC-1:01:OTU3YmU0MGNlNjNmMzQxNmViYmY0OWU1ODkxOWIyZGYhPufI: 00:12:00.847 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.847 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:00.847 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.847 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.847 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.847 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:00.847 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:00.847 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:00.847 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:12:00.847 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:00.847 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:00.847 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:00.847 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:01.107 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.107 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key3 00:12:01.107 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.107 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.107 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.107 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:01.107 19:50:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:01.366 00:12:01.366 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:01.366 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:12:01.366 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.933 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.933 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.933 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.933 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.933 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.933 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:01.933 { 00:12:01.933 "cntlid": 87, 00:12:01.933 "qid": 0, 00:12:01.933 "state": "enabled", 00:12:01.933 "thread": "nvmf_tgt_poll_group_000", 00:12:01.933 "listen_address": { 00:12:01.933 "trtype": "TCP", 00:12:01.933 "adrfam": "IPv4", 00:12:01.933 "traddr": "10.0.0.2", 00:12:01.933 "trsvcid": "4420" 00:12:01.933 }, 00:12:01.933 "peer_address": { 00:12:01.933 "trtype": "TCP", 00:12:01.933 "adrfam": "IPv4", 00:12:01.933 "traddr": "10.0.0.1", 00:12:01.933 "trsvcid": "36984" 00:12:01.933 }, 00:12:01.933 "auth": { 00:12:01.933 "state": "completed", 00:12:01.933 "digest": "sha384", 00:12:01.933 "dhgroup": "ffdhe6144" 00:12:01.933 } 00:12:01.933 } 00:12:01.933 ]' 00:12:01.933 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:01.933 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:01.933 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:01.933 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:01.933 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:01.933 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.933 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.933 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.192 19:50:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:03:NzMyNjI4OGYwMWQ3YmYzMmU2N2RkZGZhYTdhMmJmY2U3NDU4YmJhNTYzYTFkOTRjZGY5MTBkYjA1NjIyNjE4OCpnTeA=: 00:12:03.127 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.127 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:03.127 19:50:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.127 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.127 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.127 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:03.127 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:03.127 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:03.127 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:03.127 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:12:03.127 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:03.127 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:03.127 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:03.127 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:03.127 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.127 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.127 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.128 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.128 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.128 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.128 19:50:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.063 00:12:04.063 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:04.063 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:04.063 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.322 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
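The controller-name check above only confirms that bdev_nvme_get_controllers reported the expected nvme0; the entries that follow fetch the subsystem's qpairs on the target and assert the negotiated authentication parameters. As a standalone illustration, hand-written and assuming the qpair JSON keeps the shape shown in the listings throughout this log:

  # pull the qpair list for the subsystem on the target side
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

  # the test asserts the digest, DH group and final auth state of qpair 0
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe8192" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]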
00:12:04.322 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.322 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.322 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.322 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.322 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:04.322 { 00:12:04.322 "cntlid": 89, 00:12:04.322 "qid": 0, 00:12:04.322 "state": "enabled", 00:12:04.322 "thread": "nvmf_tgt_poll_group_000", 00:12:04.322 "listen_address": { 00:12:04.322 "trtype": "TCP", 00:12:04.322 "adrfam": "IPv4", 00:12:04.322 "traddr": "10.0.0.2", 00:12:04.322 "trsvcid": "4420" 00:12:04.322 }, 00:12:04.322 "peer_address": { 00:12:04.322 "trtype": "TCP", 00:12:04.322 "adrfam": "IPv4", 00:12:04.322 "traddr": "10.0.0.1", 00:12:04.322 "trsvcid": "37018" 00:12:04.322 }, 00:12:04.322 "auth": { 00:12:04.322 "state": "completed", 00:12:04.322 "digest": "sha384", 00:12:04.322 "dhgroup": "ffdhe8192" 00:12:04.322 } 00:12:04.322 } 00:12:04.322 ]' 00:12:04.322 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:04.322 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:04.322 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:04.322 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:04.322 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:04.322 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.322 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.322 19:50:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.887 19:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:00:ZTdhMThiZWVjZTc4MDFhNDFjNjdhNGVmZmUxYjg2YmQwZWE0ZjEzNjhhMzU0YmRicEg4lA==: --dhchap-ctrl-secret DHHC-1:03:YWJkNTUxZjFjNzMwZWE0ZmM5MDdhM2IxMjBmZDFhMjdlM2EwNmUwMDlmZjFhNzBhNjgwNDRiYjhkNzkzYmZlOJBcYZg=: 00:12:05.453 19:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.453 19:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:05.453 19:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.453 19:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.453 19:50:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.453 19:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:05.453 19:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:05.453 19:50:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:05.711 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:12:05.711 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:05.711 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:05.711 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:05.711 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:05.711 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.711 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.711 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.711 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.711 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.711 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.711 19:50:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.645 00:12:06.645 19:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:06.645 19:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:06.645 19:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.904 19:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.904 19:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.904 19:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.904 19:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:06.904 19:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.904 19:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:06.904 { 00:12:06.904 "cntlid": 91, 00:12:06.904 "qid": 0, 00:12:06.904 "state": "enabled", 00:12:06.904 "thread": "nvmf_tgt_poll_group_000", 00:12:06.904 "listen_address": { 00:12:06.904 "trtype": "TCP", 00:12:06.904 "adrfam": "IPv4", 00:12:06.904 "traddr": "10.0.0.2", 00:12:06.904 "trsvcid": "4420" 00:12:06.904 }, 00:12:06.904 "peer_address": { 00:12:06.904 "trtype": "TCP", 00:12:06.904 "adrfam": "IPv4", 00:12:06.904 "traddr": "10.0.0.1", 00:12:06.904 "trsvcid": "37044" 00:12:06.904 }, 00:12:06.904 "auth": { 00:12:06.904 "state": "completed", 00:12:06.904 "digest": "sha384", 00:12:06.904 "dhgroup": "ffdhe8192" 00:12:06.904 } 00:12:06.904 } 00:12:06.904 ]' 00:12:06.904 19:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:06.904 19:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:06.904 19:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:06.904 19:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:06.904 19:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:06.904 19:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.904 19:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.904 19:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.163 19:50:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:01:YmY4OTA5ZDY2ZjMyZmEwZDBmZDE3OWE0Yzg3MGIyZmZ/3JeW: --dhchap-ctrl-secret DHHC-1:02:NjMzZThkZDQxZTdiZWZiMTdkYmRmNDA3NTAzODQxMDFiOTRhNWE0NjYyNDMyOGJm5DeHHg==: 00:12:08.099 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.099 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:08.099 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.099 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.099 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.099 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:08.099 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:08.099 19:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:08.358 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:12:08.358 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:08.358 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:08.358 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:08.358 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:08.358 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.358 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.358 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.358 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.358 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.358 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.358 19:50:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.926 00:12:08.926 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:08.926 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.926 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:09.185 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.185 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.185 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.185 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.444 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.444 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:09.444 { 00:12:09.444 "cntlid": 93, 00:12:09.444 "qid": 0, 00:12:09.444 "state": "enabled", 00:12:09.444 "thread": "nvmf_tgt_poll_group_000", 00:12:09.444 
"listen_address": { 00:12:09.444 "trtype": "TCP", 00:12:09.444 "adrfam": "IPv4", 00:12:09.444 "traddr": "10.0.0.2", 00:12:09.444 "trsvcid": "4420" 00:12:09.444 }, 00:12:09.444 "peer_address": { 00:12:09.444 "trtype": "TCP", 00:12:09.444 "adrfam": "IPv4", 00:12:09.444 "traddr": "10.0.0.1", 00:12:09.444 "trsvcid": "37072" 00:12:09.444 }, 00:12:09.444 "auth": { 00:12:09.444 "state": "completed", 00:12:09.444 "digest": "sha384", 00:12:09.444 "dhgroup": "ffdhe8192" 00:12:09.444 } 00:12:09.444 } 00:12:09.444 ]' 00:12:09.444 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:09.444 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:09.444 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:09.444 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:09.444 19:50:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:09.444 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.444 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.444 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.703 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:02:YWU5Zjc3YTdjNjg0YmQ4YWU5YjY5NTkzODAwYzYwZTQ1ZWEyMTk5NjM4MGIyMjM3pJeUjw==: --dhchap-ctrl-secret DHHC-1:01:OTU3YmU0MGNlNjNmMzQxNmViYmY0OWU1ODkxOWIyZGYhPufI: 00:12:10.270 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.270 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:10.270 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.270 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.270 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.270 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:10.270 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:10.270 19:50:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:10.837 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:12:10.837 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:12:10.837 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:10.837 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:10.837 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:10.837 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.837 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key3 00:12:10.837 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.837 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.837 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.837 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:10.837 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:11.404 00:12:11.405 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:11.405 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.405 19:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:11.663 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.663 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.663 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.663 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.663 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.663 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:11.663 { 00:12:11.663 "cntlid": 95, 00:12:11.663 "qid": 0, 00:12:11.663 "state": "enabled", 00:12:11.663 "thread": "nvmf_tgt_poll_group_000", 00:12:11.663 "listen_address": { 00:12:11.663 "trtype": "TCP", 00:12:11.663 "adrfam": "IPv4", 00:12:11.663 "traddr": "10.0.0.2", 00:12:11.663 "trsvcid": "4420" 00:12:11.663 }, 00:12:11.663 "peer_address": { 00:12:11.663 "trtype": "TCP", 00:12:11.663 "adrfam": "IPv4", 00:12:11.663 "traddr": "10.0.0.1", 00:12:11.663 "trsvcid": "57560" 00:12:11.663 }, 00:12:11.663 "auth": { 00:12:11.663 "state": "completed", 00:12:11.663 "digest": "sha384", 00:12:11.663 "dhgroup": "ffdhe8192" 00:12:11.663 } 00:12:11.663 } 00:12:11.663 ]' 
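For readers skimming this log: each connect_authenticate iteration above boils down to the short RPC sequence below, assembled only from the commands visible in the log (here the sha384 / ffdhe8192 / key3 combination just exercised). This is a condensed sketch, not the test script itself; the key names (key0..key3, ckey0..ckey3) refer to DH-HMAC-CHAP keys registered earlier in the run, and the target-side rpc_cmd wrapper is assumed to talk to the target app's default RPC socket.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de

# Host-side bdev_nvme: restrict the digests and DH groups offered during DH-HMAC-CHAP.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

# Target side (rpc_cmd in the script; default RPC socket assumed here): authorize the host with key3.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3

# Host side: attach a controller, authenticating with the same key.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key3

# Verify: the controller exists and the target reports the qpair's auth as completed.
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

# Tear down the host-side controller before the nvme-cli pass.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0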
00:12:11.663 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:11.663 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:11.663 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:11.663 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:11.663 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:11.922 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.922 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.922 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.181 19:50:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:03:NzMyNjI4OGYwMWQ3YmYzMmU2N2RkZGZhYTdhMmJmY2U3NDU4YmJhNTYzYTFkOTRjZGY5MTBkYjA1NjIyNjE4OCpnTeA=: 00:12:12.746 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.746 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:12.746 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.746 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.746 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.746 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:12.746 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:12.746 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:12.746 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:12.746 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:13.004 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:12:13.004 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:13.004 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:13.004 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:13.004 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 00:12:13.004 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.004 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.004 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.004 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.004 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.004 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.004 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.262 00:12:13.262 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:13.262 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.262 19:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:13.520 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.520 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.520 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.520 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.520 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.520 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:13.520 { 00:12:13.520 "cntlid": 97, 00:12:13.520 "qid": 0, 00:12:13.520 "state": "enabled", 00:12:13.520 "thread": "nvmf_tgt_poll_group_000", 00:12:13.520 "listen_address": { 00:12:13.520 "trtype": "TCP", 00:12:13.520 "adrfam": "IPv4", 00:12:13.520 "traddr": "10.0.0.2", 00:12:13.520 "trsvcid": "4420" 00:12:13.520 }, 00:12:13.520 "peer_address": { 00:12:13.520 "trtype": "TCP", 00:12:13.520 "adrfam": "IPv4", 00:12:13.520 "traddr": "10.0.0.1", 00:12:13.520 "trsvcid": "57578" 00:12:13.520 }, 00:12:13.520 "auth": { 00:12:13.520 "state": "completed", 00:12:13.520 "digest": "sha512", 00:12:13.520 "dhgroup": "null" 00:12:13.520 } 00:12:13.520 } 00:12:13.520 ]' 00:12:13.520 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:13.520 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:13.520 19:50:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:13.778 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:13.778 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:13.778 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.778 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.778 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.035 19:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:00:ZTdhMThiZWVjZTc4MDFhNDFjNjdhNGVmZmUxYjg2YmQwZWE0ZjEzNjhhMzU0YmRicEg4lA==: --dhchap-ctrl-secret DHHC-1:03:YWJkNTUxZjFjNzMwZWE0ZmM5MDdhM2IxMjBmZDFhMjdlM2EwNmUwMDlmZjFhNzBhNjgwNDRiYjhkNzkzYmZlOJBcYZg=: 00:12:14.600 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.600 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:14.600 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.600 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.600 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.600 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:14.600 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:14.600 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:14.858 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:12:14.858 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:14.858 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:14.858 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:14.858 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:14.858 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.858 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key1 --dhchap-ctrlr-key ckey1 
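Each iteration in this log also repeats the handshake through the Linux kernel initiator (the nvme connect / nvme disconnect entries above), passing the secrets inline rather than by key name. Condensed, and with the DHHC-1 secret values replaced by placeholders, that leg is roughly the following sketch; the placeholder strings stand for the secrets printed in the log, and the de-authorization call again assumes the target app's default RPC socket.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de

# Connect with nvme-cli, supplying the host secret and, when the iteration uses one,
# the controller secret. '<...>' are placeholders for the DHHC-1 values shown in the log.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de \
    --dhchap-secret '<DHHC-1 host secret>' --dhchap-ctrl-secret '<DHHC-1 ctrl secret>'

# Disconnect and de-authorize the host before the next {digest, dhgroup, key} combination.
nvme disconnect -n "$subnqn"
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"   # rpc_cmd in the script; default socket assumed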
00:12:14.858 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.858 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.858 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.858 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.858 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.116 00:12:15.116 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:15.116 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:15.116 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.374 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.374 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.374 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.374 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.374 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.374 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:15.374 { 00:12:15.374 "cntlid": 99, 00:12:15.374 "qid": 0, 00:12:15.374 "state": "enabled", 00:12:15.374 "thread": "nvmf_tgt_poll_group_000", 00:12:15.374 "listen_address": { 00:12:15.374 "trtype": "TCP", 00:12:15.374 "adrfam": "IPv4", 00:12:15.374 "traddr": "10.0.0.2", 00:12:15.374 "trsvcid": "4420" 00:12:15.374 }, 00:12:15.374 "peer_address": { 00:12:15.374 "trtype": "TCP", 00:12:15.374 "adrfam": "IPv4", 00:12:15.374 "traddr": "10.0.0.1", 00:12:15.374 "trsvcid": "57614" 00:12:15.374 }, 00:12:15.374 "auth": { 00:12:15.374 "state": "completed", 00:12:15.374 "digest": "sha512", 00:12:15.374 "dhgroup": "null" 00:12:15.374 } 00:12:15.374 } 00:12:15.374 ]' 00:12:15.374 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:15.374 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:15.374 19:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:15.374 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:15.374 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:15.631 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:12:15.631 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.631 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.889 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:01:YmY4OTA5ZDY2ZjMyZmEwZDBmZDE3OWE0Yzg3MGIyZmZ/3JeW: --dhchap-ctrl-secret DHHC-1:02:NjMzZThkZDQxZTdiZWZiMTdkYmRmNDA3NTAzODQxMDFiOTRhNWE0NjYyNDMyOGJm5DeHHg==: 00:12:16.505 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.505 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:16.505 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.505 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.505 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.505 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:16.505 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:16.505 19:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:16.766 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:12:16.766 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:16.766 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:16.766 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:16.766 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:16.766 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.766 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.766 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.766 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.766 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.766 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.766 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.024 00:12:17.024 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:17.024 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:17.024 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.283 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.283 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.283 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.283 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.283 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.283 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:17.283 { 00:12:17.283 "cntlid": 101, 00:12:17.283 "qid": 0, 00:12:17.283 "state": "enabled", 00:12:17.283 "thread": "nvmf_tgt_poll_group_000", 00:12:17.283 "listen_address": { 00:12:17.283 "trtype": "TCP", 00:12:17.283 "adrfam": "IPv4", 00:12:17.283 "traddr": "10.0.0.2", 00:12:17.283 "trsvcid": "4420" 00:12:17.283 }, 00:12:17.283 "peer_address": { 00:12:17.283 "trtype": "TCP", 00:12:17.283 "adrfam": "IPv4", 00:12:17.283 "traddr": "10.0.0.1", 00:12:17.283 "trsvcid": "57630" 00:12:17.283 }, 00:12:17.283 "auth": { 00:12:17.283 "state": "completed", 00:12:17.283 "digest": "sha512", 00:12:17.283 "dhgroup": "null" 00:12:17.283 } 00:12:17.283 } 00:12:17.283 ]' 00:12:17.283 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:17.283 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:17.283 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:17.541 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:17.541 19:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:17.541 19:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.541 19:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.541 19:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.801 19:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:02:YWU5Zjc3YTdjNjg0YmQ4YWU5YjY5NTkzODAwYzYwZTQ1ZWEyMTk5NjM4MGIyMjM3pJeUjw==: --dhchap-ctrl-secret DHHC-1:01:OTU3YmU0MGNlNjNmMzQxNmViYmY0OWU1ODkxOWIyZGYhPufI: 00:12:18.366 19:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.366 19:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:18.366 19:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.366 19:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.366 19:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.366 19:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:18.366 19:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:18.366 19:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:18.626 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:12:18.626 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:18.626 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:18.626 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:18.626 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:18.626 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.626 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key3 00:12:18.626 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.626 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.626 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.626 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:18.626 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key3 00:12:19.192 00:12:19.192 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:19.192 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:19.192 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.497 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.497 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.497 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.497 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.497 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.497 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:19.497 { 00:12:19.497 "cntlid": 103, 00:12:19.497 "qid": 0, 00:12:19.497 "state": "enabled", 00:12:19.497 "thread": "nvmf_tgt_poll_group_000", 00:12:19.497 "listen_address": { 00:12:19.497 "trtype": "TCP", 00:12:19.497 "adrfam": "IPv4", 00:12:19.497 "traddr": "10.0.0.2", 00:12:19.497 "trsvcid": "4420" 00:12:19.497 }, 00:12:19.497 "peer_address": { 00:12:19.497 "trtype": "TCP", 00:12:19.497 "adrfam": "IPv4", 00:12:19.497 "traddr": "10.0.0.1", 00:12:19.497 "trsvcid": "57652" 00:12:19.497 }, 00:12:19.497 "auth": { 00:12:19.497 "state": "completed", 00:12:19.497 "digest": "sha512", 00:12:19.497 "dhgroup": "null" 00:12:19.497 } 00:12:19.497 } 00:12:19.497 ]' 00:12:19.497 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:19.497 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:19.497 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:19.497 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:19.497 19:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:19.497 19:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.497 19:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.497 19:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.776 19:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:03:NzMyNjI4OGYwMWQ3YmYzMmU2N2RkZGZhYTdhMmJmY2U3NDU4YmJhNTYzYTFkOTRjZGY5MTBkYjA1NjIyNjE4OCpnTeA=: 00:12:20.711 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.712 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:20.712 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.712 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.712 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.712 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:20.712 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:20.712 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:20.712 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:20.970 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:12:20.970 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:20.970 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:20.970 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:20.970 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:20.970 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.970 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.970 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.970 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.970 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.970 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.970 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.229 00:12:21.229 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:21.229 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:21.229 19:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.488 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.488 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.488 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.488 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.488 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.488 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:21.488 { 00:12:21.488 "cntlid": 105, 00:12:21.488 "qid": 0, 00:12:21.488 "state": "enabled", 00:12:21.488 "thread": "nvmf_tgt_poll_group_000", 00:12:21.488 "listen_address": { 00:12:21.488 "trtype": "TCP", 00:12:21.488 "adrfam": "IPv4", 00:12:21.488 "traddr": "10.0.0.2", 00:12:21.488 "trsvcid": "4420" 00:12:21.488 }, 00:12:21.488 "peer_address": { 00:12:21.488 "trtype": "TCP", 00:12:21.488 "adrfam": "IPv4", 00:12:21.488 "traddr": "10.0.0.1", 00:12:21.488 "trsvcid": "46454" 00:12:21.488 }, 00:12:21.488 "auth": { 00:12:21.488 "state": "completed", 00:12:21.488 "digest": "sha512", 00:12:21.488 "dhgroup": "ffdhe2048" 00:12:21.488 } 00:12:21.488 } 00:12:21.488 ]' 00:12:21.488 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:21.488 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:21.488 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:21.488 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:21.488 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:21.747 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.747 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.747 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.006 19:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:00:ZTdhMThiZWVjZTc4MDFhNDFjNjdhNGVmZmUxYjg2YmQwZWE0ZjEzNjhhMzU0YmRicEg4lA==: --dhchap-ctrl-secret DHHC-1:03:YWJkNTUxZjFjNzMwZWE0ZmM5MDdhM2IxMjBmZDFhMjdlM2EwNmUwMDlmZjFhNzBhNjgwNDRiYjhkNzkzYmZlOJBcYZg=: 00:12:22.581 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.582 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:22.582 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.582 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.582 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.582 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:22.582 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:22.582 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:22.845 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:12:22.845 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:22.845 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:22.845 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:22.845 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:22.845 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.845 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.845 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.845 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.845 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.845 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.845 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.468 00:12:23.468 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:23.468 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:23.468 19:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.468 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.468 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.468 19:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.468 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.468 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.468 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:23.468 { 00:12:23.468 "cntlid": 107, 00:12:23.468 "qid": 0, 00:12:23.468 "state": "enabled", 00:12:23.468 "thread": "nvmf_tgt_poll_group_000", 00:12:23.468 "listen_address": { 00:12:23.468 "trtype": "TCP", 00:12:23.468 "adrfam": "IPv4", 00:12:23.468 "traddr": "10.0.0.2", 00:12:23.468 "trsvcid": "4420" 00:12:23.468 }, 00:12:23.468 "peer_address": { 00:12:23.468 "trtype": "TCP", 00:12:23.468 "adrfam": "IPv4", 00:12:23.468 "traddr": "10.0.0.1", 00:12:23.468 "trsvcid": "46478" 00:12:23.468 }, 00:12:23.468 "auth": { 00:12:23.468 "state": "completed", 00:12:23.468 "digest": "sha512", 00:12:23.468 "dhgroup": "ffdhe2048" 00:12:23.468 } 00:12:23.468 } 00:12:23.468 ]' 00:12:23.468 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:23.726 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:23.726 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:23.726 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:23.726 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:23.726 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.726 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.726 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.985 19:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:01:YmY4OTA5ZDY2ZjMyZmEwZDBmZDE3OWE0Yzg3MGIyZmZ/3JeW: --dhchap-ctrl-secret DHHC-1:02:NjMzZThkZDQxZTdiZWZiMTdkYmRmNDA3NTAzODQxMDFiOTRhNWE0NjYyNDMyOGJm5DeHHg==: 00:12:24.920 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.920 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:24.920 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.920 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.920 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.920 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:24.920 19:50:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:24.920 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:24.920 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:12:24.920 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:24.920 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:24.920 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:24.920 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:24.920 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.920 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.920 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.920 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.920 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.920 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.920 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.177 00:12:25.438 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:25.438 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.438 19:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:25.697 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.697 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.697 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.697 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.697 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.697 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:12:25.697 { 00:12:25.697 "cntlid": 109, 00:12:25.697 "qid": 0, 00:12:25.697 "state": "enabled", 00:12:25.697 "thread": "nvmf_tgt_poll_group_000", 00:12:25.697 "listen_address": { 00:12:25.697 "trtype": "TCP", 00:12:25.697 "adrfam": "IPv4", 00:12:25.697 "traddr": "10.0.0.2", 00:12:25.697 "trsvcid": "4420" 00:12:25.697 }, 00:12:25.697 "peer_address": { 00:12:25.697 "trtype": "TCP", 00:12:25.697 "adrfam": "IPv4", 00:12:25.697 "traddr": "10.0.0.1", 00:12:25.697 "trsvcid": "46504" 00:12:25.697 }, 00:12:25.697 "auth": { 00:12:25.697 "state": "completed", 00:12:25.697 "digest": "sha512", 00:12:25.697 "dhgroup": "ffdhe2048" 00:12:25.697 } 00:12:25.697 } 00:12:25.697 ]' 00:12:25.697 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:25.697 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:25.697 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:25.697 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:25.697 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:25.697 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.697 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.697 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.956 19:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:02:YWU5Zjc3YTdjNjg0YmQ4YWU5YjY5NTkzODAwYzYwZTQ1ZWEyMTk5NjM4MGIyMjM3pJeUjw==: --dhchap-ctrl-secret DHHC-1:01:OTU3YmU0MGNlNjNmMzQxNmViYmY0OWU1ODkxOWIyZGYhPufI: 00:12:26.891 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.891 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:26.891 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.891 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.891 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.891 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:26.891 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:26.891 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:26.892 19:50:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:12:26.892 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:26.892 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:26.892 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:26.892 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:26.892 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.892 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key3 00:12:26.892 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.892 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.892 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.892 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:26.892 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:27.458 00:12:27.458 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:27.458 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:27.458 19:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.458 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.458 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.458 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.458 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.458 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.458 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:27.458 { 00:12:27.458 "cntlid": 111, 00:12:27.458 "qid": 0, 00:12:27.458 "state": "enabled", 00:12:27.458 "thread": "nvmf_tgt_poll_group_000", 00:12:27.458 "listen_address": { 00:12:27.458 "trtype": "TCP", 00:12:27.458 "adrfam": "IPv4", 00:12:27.458 "traddr": "10.0.0.2", 00:12:27.458 "trsvcid": "4420" 00:12:27.458 }, 00:12:27.458 "peer_address": { 00:12:27.458 "trtype": "TCP", 00:12:27.458 "adrfam": "IPv4", 00:12:27.458 "traddr": "10.0.0.1", 00:12:27.458 "trsvcid": 
"46524" 00:12:27.458 }, 00:12:27.458 "auth": { 00:12:27.458 "state": "completed", 00:12:27.458 "digest": "sha512", 00:12:27.458 "dhgroup": "ffdhe2048" 00:12:27.458 } 00:12:27.458 } 00:12:27.458 ]' 00:12:27.458 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:27.722 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:27.722 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:27.722 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:27.722 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:27.722 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.722 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.722 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.997 19:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:03:NzMyNjI4OGYwMWQ3YmYzMmU2N2RkZGZhYTdhMmJmY2U3NDU4YmJhNTYzYTFkOTRjZGY5MTBkYjA1NjIyNjE4OCpnTeA=: 00:12:28.564 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.564 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:28.564 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.564 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.564 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.564 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:28.564 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:28.564 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:28.564 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:28.822 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:12:28.822 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:28.822 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:28.822 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:12:28.822 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:28.822 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.822 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.822 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.822 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.822 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.822 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.822 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:29.389 00:12:29.389 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:29.389 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:29.389 19:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.389 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.389 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.389 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.389 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.389 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.389 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:29.389 { 00:12:29.389 "cntlid": 113, 00:12:29.389 "qid": 0, 00:12:29.389 "state": "enabled", 00:12:29.389 "thread": "nvmf_tgt_poll_group_000", 00:12:29.389 "listen_address": { 00:12:29.389 "trtype": "TCP", 00:12:29.389 "adrfam": "IPv4", 00:12:29.389 "traddr": "10.0.0.2", 00:12:29.389 "trsvcid": "4420" 00:12:29.389 }, 00:12:29.389 "peer_address": { 00:12:29.389 "trtype": "TCP", 00:12:29.389 "adrfam": "IPv4", 00:12:29.389 "traddr": "10.0.0.1", 00:12:29.389 "trsvcid": "46552" 00:12:29.389 }, 00:12:29.389 "auth": { 00:12:29.389 "state": "completed", 00:12:29.389 "digest": "sha512", 00:12:29.389 "dhgroup": "ffdhe3072" 00:12:29.389 } 00:12:29.389 } 00:12:29.389 ]' 00:12:29.389 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:29.648 19:50:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:29.648 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:29.648 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:29.648 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:29.648 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.648 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.648 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.907 19:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:00:ZTdhMThiZWVjZTc4MDFhNDFjNjdhNGVmZmUxYjg2YmQwZWE0ZjEzNjhhMzU0YmRicEg4lA==: --dhchap-ctrl-secret DHHC-1:03:YWJkNTUxZjFjNzMwZWE0ZmM5MDdhM2IxMjBmZDFhMjdlM2EwNmUwMDlmZjFhNzBhNjgwNDRiYjhkNzkzYmZlOJBcYZg=: 00:12:30.474 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.474 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:30.474 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.474 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.474 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.474 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:30.474 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:30.475 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:30.734 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:12:30.734 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:30.734 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:30.734 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:30.734 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:30.734 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.734 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.734 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.734 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.734 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.734 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.734 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.301 00:12:31.301 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:31.301 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.301 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:31.301 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.301 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.301 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.301 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.301 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.301 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:31.301 { 00:12:31.301 "cntlid": 115, 00:12:31.301 "qid": 0, 00:12:31.301 "state": "enabled", 00:12:31.301 "thread": "nvmf_tgt_poll_group_000", 00:12:31.301 "listen_address": { 00:12:31.301 "trtype": "TCP", 00:12:31.301 "adrfam": "IPv4", 00:12:31.301 "traddr": "10.0.0.2", 00:12:31.301 "trsvcid": "4420" 00:12:31.301 }, 00:12:31.301 "peer_address": { 00:12:31.301 "trtype": "TCP", 00:12:31.301 "adrfam": "IPv4", 00:12:31.301 "traddr": "10.0.0.1", 00:12:31.301 "trsvcid": "46042" 00:12:31.301 }, 00:12:31.301 "auth": { 00:12:31.301 "state": "completed", 00:12:31.301 "digest": "sha512", 00:12:31.301 "dhgroup": "ffdhe3072" 00:12:31.301 } 00:12:31.301 } 00:12:31.301 ]' 00:12:31.301 19:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:31.559 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:31.559 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:31.559 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:31.559 19:51:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:31.559 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.559 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.559 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.818 19:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:01:YmY4OTA5ZDY2ZjMyZmEwZDBmZDE3OWE0Yzg3MGIyZmZ/3JeW: --dhchap-ctrl-secret DHHC-1:02:NjMzZThkZDQxZTdiZWZiMTdkYmRmNDA3NTAzODQxMDFiOTRhNWE0NjYyNDMyOGJm5DeHHg==: 00:12:32.753 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.753 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:32.753 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.753 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.753 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.753 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:32.753 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:32.753 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:32.753 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:12:32.753 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:32.753 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:32.753 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:32.754 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:32.754 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.754 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:32.754 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.754 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.754 19:51:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.754 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:32.754 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.330 00:12:33.330 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:33.330 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.330 19:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:33.620 19:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.620 19:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.620 19:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.620 19:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.620 19:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.620 19:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:33.620 { 00:12:33.620 "cntlid": 117, 00:12:33.620 "qid": 0, 00:12:33.620 "state": "enabled", 00:12:33.620 "thread": "nvmf_tgt_poll_group_000", 00:12:33.620 "listen_address": { 00:12:33.620 "trtype": "TCP", 00:12:33.620 "adrfam": "IPv4", 00:12:33.620 "traddr": "10.0.0.2", 00:12:33.620 "trsvcid": "4420" 00:12:33.620 }, 00:12:33.620 "peer_address": { 00:12:33.620 "trtype": "TCP", 00:12:33.620 "adrfam": "IPv4", 00:12:33.620 "traddr": "10.0.0.1", 00:12:33.620 "trsvcid": "46072" 00:12:33.620 }, 00:12:33.620 "auth": { 00:12:33.620 "state": "completed", 00:12:33.620 "digest": "sha512", 00:12:33.620 "dhgroup": "ffdhe3072" 00:12:33.620 } 00:12:33.620 } 00:12:33.620 ]' 00:12:33.620 19:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:33.620 19:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:33.620 19:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:33.620 19:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:33.620 19:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:33.620 19:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.620 19:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.620 19:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.878 19:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:02:YWU5Zjc3YTdjNjg0YmQ4YWU5YjY5NTkzODAwYzYwZTQ1ZWEyMTk5NjM4MGIyMjM3pJeUjw==: --dhchap-ctrl-secret DHHC-1:01:OTU3YmU0MGNlNjNmMzQxNmViYmY0OWU1ODkxOWIyZGYhPufI: 00:12:34.836 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.836 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:34.836 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.836 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.836 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.836 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:34.836 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:34.836 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:34.836 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:12:34.836 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:34.836 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:34.836 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:34.836 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:34.836 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.836 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key3 00:12:34.836 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.836 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.836 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.836 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:34.836 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:35.419 00:12:35.419 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:35.419 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:35.419 19:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.678 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.678 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.678 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.678 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.678 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.678 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:35.678 { 00:12:35.678 "cntlid": 119, 00:12:35.678 "qid": 0, 00:12:35.678 "state": "enabled", 00:12:35.678 "thread": "nvmf_tgt_poll_group_000", 00:12:35.678 "listen_address": { 00:12:35.678 "trtype": "TCP", 00:12:35.678 "adrfam": "IPv4", 00:12:35.678 "traddr": "10.0.0.2", 00:12:35.678 "trsvcid": "4420" 00:12:35.678 }, 00:12:35.678 "peer_address": { 00:12:35.678 "trtype": "TCP", 00:12:35.678 "adrfam": "IPv4", 00:12:35.678 "traddr": "10.0.0.1", 00:12:35.678 "trsvcid": "46098" 00:12:35.678 }, 00:12:35.678 "auth": { 00:12:35.678 "state": "completed", 00:12:35.678 "digest": "sha512", 00:12:35.678 "dhgroup": "ffdhe3072" 00:12:35.678 } 00:12:35.678 } 00:12:35.678 ]' 00:12:35.678 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:35.678 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:35.678 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:35.678 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:35.678 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:35.678 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.678 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.678 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.274 19:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret 
DHHC-1:03:NzMyNjI4OGYwMWQ3YmYzMmU2N2RkZGZhYTdhMmJmY2U3NDU4YmJhNTYzYTFkOTRjZGY5MTBkYjA1NjIyNjE4OCpnTeA=: 00:12:36.844 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.844 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:36.844 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.844 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.844 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.844 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:36.844 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:36.844 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:36.844 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:37.103 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:12:37.103 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:37.103 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:37.103 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:37.103 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:37.103 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.103 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.103 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.103 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.103 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.103 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.103 19:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
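The iterations above all drive the same RPC sequence. Condensed into a rough shell sketch for reference: the rpc.py paths, method names, and flags are taken from the trace itself, while the helper name run_dhchap_round and the use of the default target-side socket are assumptions of this sketch, not part of target/auth.sh.

  #!/usr/bin/env bash
  # Rough sketch of one connect_authenticate round as traced above.
  # rpc.py paths, method names and flags come from the log; the helper name
  # run_dhchap_round and the bare target-side socket are assumptions.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de

  run_dhchap_round() {   # e.g. run_dhchap_round sha512 ffdhe4096 key0
      local digest=$1 dhgroup=$2 key=$3
      # Host side: restrict negotiation to a single digest/dhgroup pair.
      "$rpc" -s "$hostsock" bdev_nvme_set_options \
          --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # Target side: allow the host and name the key it must authenticate with
      # (plus --dhchap-ctrlr-key for the bidirectional cases in the trace).
      "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key"
      # Host side: attach the controller; DH-HMAC-CHAP runs during this connect.
      "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key "$key"
      # Check the controller came up and the qpair reports the expected auth parameters.
      "$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name'
      "$rpc" nvmf_subsystem_get_qpairs "$subnqn" \
          | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
      # Tear down before the next digest/dhgroup/key combination.
      "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
      "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
  }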
00:12:37.362 00:12:37.362 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:37.362 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.362 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:37.928 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.928 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.928 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.928 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.928 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.928 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:37.928 { 00:12:37.928 "cntlid": 121, 00:12:37.928 "qid": 0, 00:12:37.928 "state": "enabled", 00:12:37.928 "thread": "nvmf_tgt_poll_group_000", 00:12:37.928 "listen_address": { 00:12:37.928 "trtype": "TCP", 00:12:37.928 "adrfam": "IPv4", 00:12:37.928 "traddr": "10.0.0.2", 00:12:37.928 "trsvcid": "4420" 00:12:37.928 }, 00:12:37.928 "peer_address": { 00:12:37.928 "trtype": "TCP", 00:12:37.928 "adrfam": "IPv4", 00:12:37.928 "traddr": "10.0.0.1", 00:12:37.928 "trsvcid": "46124" 00:12:37.928 }, 00:12:37.928 "auth": { 00:12:37.928 "state": "completed", 00:12:37.928 "digest": "sha512", 00:12:37.928 "dhgroup": "ffdhe4096" 00:12:37.928 } 00:12:37.928 } 00:12:37.928 ]' 00:12:37.928 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:37.928 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:37.928 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:37.928 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:37.928 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:37.928 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.928 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.928 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.188 19:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:00:ZTdhMThiZWVjZTc4MDFhNDFjNjdhNGVmZmUxYjg2YmQwZWE0ZjEzNjhhMzU0YmRicEg4lA==: --dhchap-ctrl-secret DHHC-1:03:YWJkNTUxZjFjNzMwZWE0ZmM5MDdhM2IxMjBmZDFhMjdlM2EwNmUwMDlmZjFhNzBhNjgwNDRiYjhkNzkzYmZlOJBcYZg=: 00:12:38.756 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.756 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.756 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:38.756 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.756 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.756 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.756 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:38.756 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:38.756 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:39.325 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:12:39.325 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:39.325 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:39.325 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:39.325 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:39.325 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.325 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.325 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.325 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.325 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.325 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.325 19:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.588 00:12:39.588 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:39.588 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:39.588 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.846 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.846 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.846 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.846 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.846 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.846 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:39.846 { 00:12:39.846 "cntlid": 123, 00:12:39.846 "qid": 0, 00:12:39.846 "state": "enabled", 00:12:39.846 "thread": "nvmf_tgt_poll_group_000", 00:12:39.846 "listen_address": { 00:12:39.846 "trtype": "TCP", 00:12:39.846 "adrfam": "IPv4", 00:12:39.846 "traddr": "10.0.0.2", 00:12:39.846 "trsvcid": "4420" 00:12:39.846 }, 00:12:39.846 "peer_address": { 00:12:39.846 "trtype": "TCP", 00:12:39.846 "adrfam": "IPv4", 00:12:39.846 "traddr": "10.0.0.1", 00:12:39.846 "trsvcid": "46164" 00:12:39.846 }, 00:12:39.846 "auth": { 00:12:39.846 "state": "completed", 00:12:39.846 "digest": "sha512", 00:12:39.846 "dhgroup": "ffdhe4096" 00:12:39.846 } 00:12:39.846 } 00:12:39.846 ]' 00:12:39.846 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:39.846 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:39.846 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:39.846 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:39.846 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:40.105 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.105 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.105 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.364 19:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:01:YmY4OTA5ZDY2ZjMyZmEwZDBmZDE3OWE0Yzg3MGIyZmZ/3JeW: --dhchap-ctrl-secret DHHC-1:02:NjMzZThkZDQxZTdiZWZiMTdkYmRmNDA3NTAzODQxMDFiOTRhNWE0NjYyNDMyOGJm5DeHHg==: 00:12:40.931 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.931 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:40.931 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
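The trace also exercises the plain nvme-cli path with the same credentials (the auth.sh@52 and @55 lines). Stripped of the generated key material, one host-side round looks roughly like the sketch below; addresses, NQNs, and flags are copied from the log, and the DHHC-1 strings are placeholders only, not the secrets produced by the test.

  # Connect using DH-HMAC-CHAP secrets, then disconnect, mirroring the @52/@55 lines above.
  # The DHHC-1 values here are placeholders, not the secrets generated by the test.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de \
      --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de \
      --dhchap-secret 'DHHC-1:00:<host secret>:' \
      --dhchap-ctrl-secret 'DHHC-1:03:<controller secret>:'  # only present when a ctrlr key is configured

  nvme disconnect -n nqn.2024-03.io.spdk:cnode0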
00:12:40.931 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.931 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.931 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:40.931 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:40.931 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:41.189 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:12:41.189 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:41.189 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:41.189 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:41.189 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:41.189 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.189 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.189 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.189 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.189 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.189 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.189 19:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.757 00:12:41.757 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:41.757 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:41.757 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.017 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.017 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.017 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.017 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.017 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.017 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:42.017 { 00:12:42.017 "cntlid": 125, 00:12:42.017 "qid": 0, 00:12:42.017 "state": "enabled", 00:12:42.017 "thread": "nvmf_tgt_poll_group_000", 00:12:42.017 "listen_address": { 00:12:42.017 "trtype": "TCP", 00:12:42.017 "adrfam": "IPv4", 00:12:42.017 "traddr": "10.0.0.2", 00:12:42.017 "trsvcid": "4420" 00:12:42.017 }, 00:12:42.017 "peer_address": { 00:12:42.017 "trtype": "TCP", 00:12:42.017 "adrfam": "IPv4", 00:12:42.017 "traddr": "10.0.0.1", 00:12:42.017 "trsvcid": "45672" 00:12:42.017 }, 00:12:42.017 "auth": { 00:12:42.017 "state": "completed", 00:12:42.017 "digest": "sha512", 00:12:42.017 "dhgroup": "ffdhe4096" 00:12:42.017 } 00:12:42.017 } 00:12:42.017 ]' 00:12:42.017 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:42.017 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:42.017 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:42.017 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:42.017 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:42.017 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.017 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.017 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.275 19:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:02:YWU5Zjc3YTdjNjg0YmQ4YWU5YjY5NTkzODAwYzYwZTQ1ZWEyMTk5NjM4MGIyMjM3pJeUjw==: --dhchap-ctrl-secret DHHC-1:01:OTU3YmU0MGNlNjNmMzQxNmViYmY0OWU1ODkxOWIyZGYhPufI: 00:12:42.861 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.861 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:42.861 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.861 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.861 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.861 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:42.861 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 
-- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:42.861 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:43.120 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:12:43.120 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:43.120 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:43.120 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:43.120 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:43.120 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.120 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key3 00:12:43.120 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.120 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.120 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.120 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:43.120 19:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:43.687 00:12:43.687 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:43.687 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.687 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:43.946 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.946 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.946 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.946 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.946 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.946 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:43.946 { 00:12:43.946 "cntlid": 127, 00:12:43.946 "qid": 0, 00:12:43.946 "state": "enabled", 00:12:43.946 "thread": 
"nvmf_tgt_poll_group_000", 00:12:43.946 "listen_address": { 00:12:43.946 "trtype": "TCP", 00:12:43.946 "adrfam": "IPv4", 00:12:43.946 "traddr": "10.0.0.2", 00:12:43.946 "trsvcid": "4420" 00:12:43.946 }, 00:12:43.946 "peer_address": { 00:12:43.946 "trtype": "TCP", 00:12:43.946 "adrfam": "IPv4", 00:12:43.946 "traddr": "10.0.0.1", 00:12:43.946 "trsvcid": "45692" 00:12:43.946 }, 00:12:43.946 "auth": { 00:12:43.946 "state": "completed", 00:12:43.946 "digest": "sha512", 00:12:43.946 "dhgroup": "ffdhe4096" 00:12:43.946 } 00:12:43.946 } 00:12:43.946 ]' 00:12:43.946 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:43.946 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:43.946 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:43.946 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:43.946 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:44.204 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.204 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.204 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.463 19:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:03:NzMyNjI4OGYwMWQ3YmYzMmU2N2RkZGZhYTdhMmJmY2U3NDU4YmJhNTYzYTFkOTRjZGY5MTBkYjA1NjIyNjE4OCpnTeA=: 00:12:45.027 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.027 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:45.027 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.027 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.027 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.027 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:45.027 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:45.027 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:45.027 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:45.593 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 
ffdhe6144 0 00:12:45.593 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:45.593 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:45.593 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:45.593 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:45.593 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.593 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.593 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.593 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.593 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.593 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.593 19:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.851 00:12:45.851 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:45.851 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:45.851 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.109 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.109 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.109 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.109 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.109 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.109 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:46.109 { 00:12:46.109 "cntlid": 129, 00:12:46.109 "qid": 0, 00:12:46.109 "state": "enabled", 00:12:46.109 "thread": "nvmf_tgt_poll_group_000", 00:12:46.109 "listen_address": { 00:12:46.109 "trtype": "TCP", 00:12:46.109 "adrfam": "IPv4", 00:12:46.109 "traddr": "10.0.0.2", 00:12:46.109 "trsvcid": "4420" 00:12:46.109 }, 00:12:46.109 "peer_address": { 00:12:46.109 "trtype": "TCP", 00:12:46.109 "adrfam": "IPv4", 00:12:46.109 "traddr": "10.0.0.1", 00:12:46.109 "trsvcid": "45702" 00:12:46.109 }, 
00:12:46.109 "auth": { 00:12:46.109 "state": "completed", 00:12:46.109 "digest": "sha512", 00:12:46.109 "dhgroup": "ffdhe6144" 00:12:46.109 } 00:12:46.109 } 00:12:46.109 ]' 00:12:46.109 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:46.367 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:46.367 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:46.367 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:46.367 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:46.367 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.367 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.367 19:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.625 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:00:ZTdhMThiZWVjZTc4MDFhNDFjNjdhNGVmZmUxYjg2YmQwZWE0ZjEzNjhhMzU0YmRicEg4lA==: --dhchap-ctrl-secret DHHC-1:03:YWJkNTUxZjFjNzMwZWE0ZmM5MDdhM2IxMjBmZDFhMjdlM2EwNmUwMDlmZjFhNzBhNjgwNDRiYjhkNzkzYmZlOJBcYZg=: 00:12:47.558 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.558 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:47.558 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.558 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.558 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.558 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:47.558 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:47.558 19:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:47.816 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:12:47.816 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:47.816 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:47.816 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:47.816 19:51:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:47.816 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.816 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.816 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.816 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.816 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.816 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.816 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:48.381 00:12:48.381 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:48.381 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:48.381 19:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.639 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.639 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.639 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.639 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.639 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.639 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:48.639 { 00:12:48.639 "cntlid": 131, 00:12:48.639 "qid": 0, 00:12:48.639 "state": "enabled", 00:12:48.639 "thread": "nvmf_tgt_poll_group_000", 00:12:48.639 "listen_address": { 00:12:48.639 "trtype": "TCP", 00:12:48.639 "adrfam": "IPv4", 00:12:48.639 "traddr": "10.0.0.2", 00:12:48.639 "trsvcid": "4420" 00:12:48.639 }, 00:12:48.639 "peer_address": { 00:12:48.639 "trtype": "TCP", 00:12:48.639 "adrfam": "IPv4", 00:12:48.639 "traddr": "10.0.0.1", 00:12:48.639 "trsvcid": "45730" 00:12:48.639 }, 00:12:48.639 "auth": { 00:12:48.639 "state": "completed", 00:12:48.639 "digest": "sha512", 00:12:48.639 "dhgroup": "ffdhe6144" 00:12:48.639 } 00:12:48.639 } 00:12:48.639 ]' 00:12:48.639 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:48.639 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha512 == \s\h\a\5\1\2 ]] 00:12:48.639 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:48.639 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:48.639 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:48.639 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.639 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.639 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.897 19:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:01:YmY4OTA5ZDY2ZjMyZmEwZDBmZDE3OWE0Yzg3MGIyZmZ/3JeW: --dhchap-ctrl-secret DHHC-1:02:NjMzZThkZDQxZTdiZWZiMTdkYmRmNDA3NTAzODQxMDFiOTRhNWE0NjYyNDMyOGJm5DeHHg==: 00:12:49.881 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.881 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:49.881 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.881 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.881 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.881 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:49.881 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:49.881 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:50.139 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:12:50.139 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:50.139 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:50.139 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:50.139 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:50.139 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.139 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:50.139 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.139 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.139 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.139 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:50.140 19:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:50.706 00:12:50.706 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:50.706 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:50.706 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.966 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.966 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.966 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.966 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.966 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.966 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:50.966 { 00:12:50.966 "cntlid": 133, 00:12:50.966 "qid": 0, 00:12:50.966 "state": "enabled", 00:12:50.966 "thread": "nvmf_tgt_poll_group_000", 00:12:50.966 "listen_address": { 00:12:50.966 "trtype": "TCP", 00:12:50.966 "adrfam": "IPv4", 00:12:50.966 "traddr": "10.0.0.2", 00:12:50.966 "trsvcid": "4420" 00:12:50.966 }, 00:12:50.966 "peer_address": { 00:12:50.966 "trtype": "TCP", 00:12:50.966 "adrfam": "IPv4", 00:12:50.966 "traddr": "10.0.0.1", 00:12:50.966 "trsvcid": "39352" 00:12:50.966 }, 00:12:50.966 "auth": { 00:12:50.966 "state": "completed", 00:12:50.966 "digest": "sha512", 00:12:50.966 "dhgroup": "ffdhe6144" 00:12:50.966 } 00:12:50.966 } 00:12:50.966 ]' 00:12:50.966 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:50.966 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:50.966 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:50.966 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:50.966 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:51.225 19:51:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.225 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.225 19:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.483 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:02:YWU5Zjc3YTdjNjg0YmQ4YWU5YjY5NTkzODAwYzYwZTQ1ZWEyMTk5NjM4MGIyMjM3pJeUjw==: --dhchap-ctrl-secret DHHC-1:01:OTU3YmU0MGNlNjNmMzQxNmViYmY0OWU1ODkxOWIyZGYhPufI: 00:12:52.049 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.049 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:52.049 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.049 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.049 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.049 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:52.049 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:52.049 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:52.615 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:12:52.615 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:52.615 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:52.615 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:52.615 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:52.615 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.615 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key3 00:12:52.615 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.615 19:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.615 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.615 19:51:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:52.615 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:52.875 00:12:52.875 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:52.875 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.875 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:53.134 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.134 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.134 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.134 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.134 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.134 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:53.134 { 00:12:53.134 "cntlid": 135, 00:12:53.134 "qid": 0, 00:12:53.134 "state": "enabled", 00:12:53.134 "thread": "nvmf_tgt_poll_group_000", 00:12:53.134 "listen_address": { 00:12:53.134 "trtype": "TCP", 00:12:53.134 "adrfam": "IPv4", 00:12:53.134 "traddr": "10.0.0.2", 00:12:53.134 "trsvcid": "4420" 00:12:53.134 }, 00:12:53.134 "peer_address": { 00:12:53.134 "trtype": "TCP", 00:12:53.134 "adrfam": "IPv4", 00:12:53.134 "traddr": "10.0.0.1", 00:12:53.134 "trsvcid": "39376" 00:12:53.134 }, 00:12:53.134 "auth": { 00:12:53.134 "state": "completed", 00:12:53.134 "digest": "sha512", 00:12:53.134 "dhgroup": "ffdhe6144" 00:12:53.134 } 00:12:53.134 } 00:12:53.134 ]' 00:12:53.134 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:53.393 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:53.393 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:53.393 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:53.393 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:53.393 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.393 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.393 19:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.651 19:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:03:NzMyNjI4OGYwMWQ3YmYzMmU2N2RkZGZhYTdhMmJmY2U3NDU4YmJhNTYzYTFkOTRjZGY5MTBkYjA1NjIyNjE4OCpnTeA=: 00:12:54.587 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.587 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:54.587 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.587 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.587 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.587 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:54.587 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:54.587 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:54.587 19:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:54.846 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:12:54.846 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:54.846 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:54.846 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:54.846 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:54.846 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.846 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.846 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.846 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.846 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.846 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.846 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.414 00:12:55.414 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:55.414 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.414 19:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:55.673 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.673 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.673 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.673 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.673 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.673 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:55.673 { 00:12:55.673 "cntlid": 137, 00:12:55.673 "qid": 0, 00:12:55.673 "state": "enabled", 00:12:55.673 "thread": "nvmf_tgt_poll_group_000", 00:12:55.673 "listen_address": { 00:12:55.673 "trtype": "TCP", 00:12:55.673 "adrfam": "IPv4", 00:12:55.673 "traddr": "10.0.0.2", 00:12:55.673 "trsvcid": "4420" 00:12:55.673 }, 00:12:55.673 "peer_address": { 00:12:55.673 "trtype": "TCP", 00:12:55.673 "adrfam": "IPv4", 00:12:55.673 "traddr": "10.0.0.1", 00:12:55.673 "trsvcid": "39400" 00:12:55.673 }, 00:12:55.673 "auth": { 00:12:55.673 "state": "completed", 00:12:55.673 "digest": "sha512", 00:12:55.673 "dhgroup": "ffdhe8192" 00:12:55.673 } 00:12:55.673 } 00:12:55.673 ]' 00:12:55.673 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:55.932 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:55.932 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:55.932 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:55.932 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:55.932 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.932 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.932 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.191 19:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:00:ZTdhMThiZWVjZTc4MDFhNDFjNjdhNGVmZmUxYjg2YmQwZWE0ZjEzNjhhMzU0YmRicEg4lA==: 
--dhchap-ctrl-secret DHHC-1:03:YWJkNTUxZjFjNzMwZWE0ZmM5MDdhM2IxMjBmZDFhMjdlM2EwNmUwMDlmZjFhNzBhNjgwNDRiYjhkNzkzYmZlOJBcYZg=: 00:12:57.127 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.127 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:57.127 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.127 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.127 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.127 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:57.127 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:57.127 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:57.386 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:12:57.386 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:57.386 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:57.386 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:57.386 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:57.386 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.386 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:57.386 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.386 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.386 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.386 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:57.386 19:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:57.953 00:12:57.953 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:12:57.953 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:57.953 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.211 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.211 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.211 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.211 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.211 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.211 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:58.211 { 00:12:58.211 "cntlid": 139, 00:12:58.211 "qid": 0, 00:12:58.211 "state": "enabled", 00:12:58.211 "thread": "nvmf_tgt_poll_group_000", 00:12:58.211 "listen_address": { 00:12:58.211 "trtype": "TCP", 00:12:58.211 "adrfam": "IPv4", 00:12:58.211 "traddr": "10.0.0.2", 00:12:58.211 "trsvcid": "4420" 00:12:58.211 }, 00:12:58.211 "peer_address": { 00:12:58.211 "trtype": "TCP", 00:12:58.211 "adrfam": "IPv4", 00:12:58.211 "traddr": "10.0.0.1", 00:12:58.211 "trsvcid": "39436" 00:12:58.212 }, 00:12:58.212 "auth": { 00:12:58.212 "state": "completed", 00:12:58.212 "digest": "sha512", 00:12:58.212 "dhgroup": "ffdhe8192" 00:12:58.212 } 00:12:58.212 } 00:12:58.212 ]' 00:12:58.212 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:58.470 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:58.470 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:58.470 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:58.470 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:58.470 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.470 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.470 19:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.728 19:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:01:YmY4OTA5ZDY2ZjMyZmEwZDBmZDE3OWE0Yzg3MGIyZmZ/3JeW: --dhchap-ctrl-secret DHHC-1:02:NjMzZThkZDQxZTdiZWZiMTdkYmRmNDA3NTAzODQxMDFiOTRhNWE0NjYyNDMyOGJm5DeHHg==: 00:12:59.296 19:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.296 19:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:12:59.296 19:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.296 19:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.296 19:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.296 19:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:59.296 19:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:59.296 19:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:59.555 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:12:59.555 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:59.555 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:59.555 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:59.555 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:59.555 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.555 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:59.555 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.555 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.555 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.555 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:59.555 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.491 00:13:00.491 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:00.491 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:00.491 19:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.750 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:13:00.750 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.750 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.750 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.750 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.750 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:00.750 { 00:13:00.750 "cntlid": 141, 00:13:00.750 "qid": 0, 00:13:00.750 "state": "enabled", 00:13:00.750 "thread": "nvmf_tgt_poll_group_000", 00:13:00.750 "listen_address": { 00:13:00.750 "trtype": "TCP", 00:13:00.750 "adrfam": "IPv4", 00:13:00.750 "traddr": "10.0.0.2", 00:13:00.750 "trsvcid": "4420" 00:13:00.750 }, 00:13:00.750 "peer_address": { 00:13:00.750 "trtype": "TCP", 00:13:00.750 "adrfam": "IPv4", 00:13:00.750 "traddr": "10.0.0.1", 00:13:00.750 "trsvcid": "57424" 00:13:00.750 }, 00:13:00.750 "auth": { 00:13:00.750 "state": "completed", 00:13:00.750 "digest": "sha512", 00:13:00.750 "dhgroup": "ffdhe8192" 00:13:00.750 } 00:13:00.750 } 00:13:00.750 ]' 00:13:00.750 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:00.750 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:00.750 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:00.750 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:00.750 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:00.750 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.750 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.750 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.009 19:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:02:YWU5Zjc3YTdjNjg0YmQ4YWU5YjY5NTkzODAwYzYwZTQ1ZWEyMTk5NjM4MGIyMjM3pJeUjw==: --dhchap-ctrl-secret DHHC-1:01:OTU3YmU0MGNlNjNmMzQxNmViYmY0OWU1ODkxOWIyZGYhPufI: 00:13:01.942 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.942 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:13:01.942 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.942 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.942 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.942 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:01.942 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:01.942 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:02.200 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:13:02.200 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:02.200 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:02.200 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:02.200 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:02.200 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.200 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key3 00:13:02.200 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.200 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.201 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.201 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:02.201 19:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:02.768 00:13:02.768 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:02.768 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:02.768 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.336 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.336 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.336 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.336 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.336 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:03.336 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:03.336 { 00:13:03.336 "cntlid": 143, 00:13:03.336 "qid": 0, 00:13:03.336 "state": "enabled", 00:13:03.336 "thread": "nvmf_tgt_poll_group_000", 00:13:03.336 "listen_address": { 00:13:03.336 "trtype": "TCP", 00:13:03.336 "adrfam": "IPv4", 00:13:03.336 "traddr": "10.0.0.2", 00:13:03.336 "trsvcid": "4420" 00:13:03.336 }, 00:13:03.336 "peer_address": { 00:13:03.336 "trtype": "TCP", 00:13:03.336 "adrfam": "IPv4", 00:13:03.336 "traddr": "10.0.0.1", 00:13:03.336 "trsvcid": "57468" 00:13:03.336 }, 00:13:03.336 "auth": { 00:13:03.336 "state": "completed", 00:13:03.336 "digest": "sha512", 00:13:03.336 "dhgroup": "ffdhe8192" 00:13:03.336 } 00:13:03.336 } 00:13:03.336 ]' 00:13:03.336 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:03.336 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:03.336 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:03.336 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:03.336 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:03.336 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.336 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.336 19:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.595 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:03:NzMyNjI4OGYwMWQ3YmYzMmU2N2RkZGZhYTdhMmJmY2U3NDU4YmJhNTYzYTFkOTRjZGY5MTBkYjA1NjIyNjE4OCpnTeA=: 00:13:04.595 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.595 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:13:04.595 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.595 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.595 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.595 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:04.595 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:13:04.595 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:04.595 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:04.595 19:51:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:04.595 19:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:04.596 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:13:04.596 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:04.596 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:04.596 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:04.596 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:04.596 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.596 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.596 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.596 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.596 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.596 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.596 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:05.529 00:13:05.529 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:05.529 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:05.529 19:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.529 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.529 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.529 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.529 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.529 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:05.529 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:05.529 { 00:13:05.529 "cntlid": 145, 00:13:05.529 "qid": 0, 00:13:05.529 "state": "enabled", 00:13:05.529 "thread": "nvmf_tgt_poll_group_000", 00:13:05.529 "listen_address": { 00:13:05.529 "trtype": "TCP", 00:13:05.529 "adrfam": "IPv4", 00:13:05.529 "traddr": "10.0.0.2", 00:13:05.529 "trsvcid": "4420" 00:13:05.529 }, 00:13:05.529 "peer_address": { 00:13:05.529 "trtype": "TCP", 00:13:05.529 "adrfam": "IPv4", 00:13:05.529 "traddr": "10.0.0.1", 00:13:05.529 "trsvcid": "57498" 00:13:05.529 }, 00:13:05.529 "auth": { 00:13:05.529 "state": "completed", 00:13:05.529 "digest": "sha512", 00:13:05.529 "dhgroup": "ffdhe8192" 00:13:05.529 } 00:13:05.529 } 00:13:05.529 ]' 00:13:05.529 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:05.788 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:05.788 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:05.788 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:05.788 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:05.788 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.788 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.788 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.047 19:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:00:ZTdhMThiZWVjZTc4MDFhNDFjNjdhNGVmZmUxYjg2YmQwZWE0ZjEzNjhhMzU0YmRicEg4lA==: --dhchap-ctrl-secret DHHC-1:03:YWJkNTUxZjFjNzMwZWE0ZmM5MDdhM2IxMjBmZDFhMjdlM2EwNmUwMDlmZjFhNzBhNjgwNDRiYjhkNzkzYmZlOJBcYZg=: 00:13:06.614 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.873 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:13:06.873 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.873 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.873 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.873 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key1 00:13:06.873 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.873 19:51:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.873 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.873 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:06.873 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:06.873 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:06.873 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:06.873 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:06.873 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:06.873 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:06.873 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:06.873 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:07.440 request: 00:13:07.440 { 00:13:07.440 "name": "nvme0", 00:13:07.440 "trtype": "tcp", 00:13:07.440 "traddr": "10.0.0.2", 00:13:07.440 "adrfam": "ipv4", 00:13:07.440 "trsvcid": "4420", 00:13:07.440 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:07.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de", 00:13:07.440 "prchk_reftag": false, 00:13:07.440 "prchk_guard": false, 00:13:07.440 "hdgst": false, 00:13:07.440 "ddgst": false, 00:13:07.440 "dhchap_key": "key2", 00:13:07.440 "method": "bdev_nvme_attach_controller", 00:13:07.440 "req_id": 1 00:13:07.440 } 00:13:07.440 Got JSON-RPC error response 00:13:07.440 response: 00:13:07.440 { 00:13:07.440 "code": -5, 00:13:07.440 "message": "Input/output error" 00:13:07.440 } 00:13:07.440 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:07.440 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:07.440 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:07.440 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:07.440 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 
00:13:07.440 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.440 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.440 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.440 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:07.440 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.440 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.440 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.440 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:07.440 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:07.440 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:07.440 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:07.440 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:07.440 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:07.440 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:07.440 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:07.440 19:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:08.064 request: 00:13:08.064 { 00:13:08.064 "name": "nvme0", 00:13:08.064 "trtype": "tcp", 00:13:08.064 "traddr": "10.0.0.2", 00:13:08.064 "adrfam": "ipv4", 00:13:08.064 "trsvcid": "4420", 00:13:08.064 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:08.064 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de", 00:13:08.064 "prchk_reftag": false, 00:13:08.064 "prchk_guard": false, 00:13:08.064 "hdgst": false, 00:13:08.064 "ddgst": false, 00:13:08.064 "dhchap_key": "key1", 00:13:08.064 "dhchap_ctrlr_key": "ckey2", 00:13:08.064 "method": "bdev_nvme_attach_controller", 
00:13:08.064 "req_id": 1 00:13:08.064 } 00:13:08.064 Got JSON-RPC error response 00:13:08.064 response: 00:13:08.064 { 00:13:08.064 "code": -5, 00:13:08.064 "message": "Input/output error" 00:13:08.064 } 00:13:08.064 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:08.065 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:08.065 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:08.065 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:08.065 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:13:08.065 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.065 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.065 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.065 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key1 00:13:08.065 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.065 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.065 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.065 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.065 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:08.065 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.065 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:08.065 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:08.065 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:08.065 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:08.065 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.065 19:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.631 request: 00:13:08.631 { 00:13:08.631 "name": "nvme0", 00:13:08.631 "trtype": "tcp", 00:13:08.631 "traddr": "10.0.0.2", 00:13:08.631 "adrfam": "ipv4", 00:13:08.631 "trsvcid": "4420", 00:13:08.631 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:08.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de", 00:13:08.631 "prchk_reftag": false, 00:13:08.631 "prchk_guard": false, 00:13:08.631 "hdgst": false, 00:13:08.631 "ddgst": false, 00:13:08.631 "dhchap_key": "key1", 00:13:08.631 "dhchap_ctrlr_key": "ckey1", 00:13:08.631 "method": "bdev_nvme_attach_controller", 00:13:08.631 "req_id": 1 00:13:08.631 } 00:13:08.631 Got JSON-RPC error response 00:13:08.631 response: 00:13:08.631 { 00:13:08.631 "code": -5, 00:13:08.631 "message": "Input/output error" 00:13:08.631 } 00:13:08.631 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:08.631 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:08.631 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:08.631 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:08.631 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:13:08.631 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.631 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.631 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.631 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 68429 00:13:08.631 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 68429 ']' 00:13:08.631 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 68429 00:13:08.631 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:08.631 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:08.631 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68429 00:13:08.631 killing process with pid 68429 00:13:08.632 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:08.632 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:08.632 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68429' 00:13:08.632 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 68429 00:13:08.632 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 68429 00:13:08.891 19:51:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:08.891 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:08.891 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:08.891 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.891 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=71485 00:13:08.891 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:08.891 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 71485 00:13:08.891 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 71485 ']' 00:13:08.891 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.891 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:08.891 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.891 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:08.891 19:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.825 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:09.825 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:09.825 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:09.825 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:09.825 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.825 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.825 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:09.825 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 71485 00:13:09.825 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 71485 ']' 00:13:09.825 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.825 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:09.825 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
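At this point the original target (pid 68429) has been stopped and a new one started with --wait-for-rpc and the nvmf_auth debug log flag, and the suite now waits for its RPC socket to come up. A rough sketch of what such a wait loop does, assuming a hypothetical wait_for_rpc_socket helper (the suite's own waitforlisten additionally bounds the retries and checks that the PID is still alive):

    # Poll until the target's JSON-RPC UNIX socket answers, or give up.
    wait_for_rpc_socket() {
        local rpc_addr=${1:-/var/tmp/spdk.sock} retries=${2:-100}
        while (( retries-- > 0 )); do
            # rpc_get_methods only succeeds once the app is listening on the socket
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.5
        done
        echo "timed out waiting for $rpc_addr" >&2
        return 1
    }

    wait_for_rpc_socket /var/tmp/spdk.sock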
00:13:09.825 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:09.825 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.082 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:10.082 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:10.082 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:13:10.082 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.082 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.340 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.340 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:13:10.340 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:10.340 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:10.340 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:10.340 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:10.340 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.340 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key3 00:13:10.340 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.340 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.340 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.340 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:10.340 19:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:10.906 00:13:10.906 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:10.906 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:10.906 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.164 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.164 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:13:11.164 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.164 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.164 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.164 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:11.164 { 00:13:11.164 "cntlid": 1, 00:13:11.164 "qid": 0, 00:13:11.164 "state": "enabled", 00:13:11.164 "thread": "nvmf_tgt_poll_group_000", 00:13:11.164 "listen_address": { 00:13:11.164 "trtype": "TCP", 00:13:11.164 "adrfam": "IPv4", 00:13:11.164 "traddr": "10.0.0.2", 00:13:11.164 "trsvcid": "4420" 00:13:11.164 }, 00:13:11.164 "peer_address": { 00:13:11.164 "trtype": "TCP", 00:13:11.164 "adrfam": "IPv4", 00:13:11.164 "traddr": "10.0.0.1", 00:13:11.164 "trsvcid": "39482" 00:13:11.164 }, 00:13:11.164 "auth": { 00:13:11.164 "state": "completed", 00:13:11.164 "digest": "sha512", 00:13:11.164 "dhgroup": "ffdhe8192" 00:13:11.164 } 00:13:11.164 } 00:13:11.164 ]' 00:13:11.164 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:11.165 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:11.165 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:11.422 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:11.422 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:11.422 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.422 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.422 19:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.680 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid 69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-secret DHHC-1:03:NzMyNjI4OGYwMWQ3YmYzMmU2N2RkZGZhYTdhMmJmY2U3NDU4YmJhNTYzYTFkOTRjZGY5MTBkYjA1NjIyNjE4OCpnTeA=: 00:13:12.247 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.247 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:13:12.247 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.247 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.505 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.505 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --dhchap-key key3 00:13:12.505 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.505 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.505 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.505 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:12.505 19:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:12.763 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:12.763 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:12.763 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:12.763 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:12.763 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:12.763 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:12.763 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:12.763 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:12.763 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:13.020 request: 00:13:13.020 { 00:13:13.020 "name": "nvme0", 00:13:13.020 "trtype": "tcp", 00:13:13.020 "traddr": "10.0.0.2", 00:13:13.020 "adrfam": "ipv4", 00:13:13.020 "trsvcid": "4420", 00:13:13.020 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:13.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de", 00:13:13.020 "prchk_reftag": false, 00:13:13.020 "prchk_guard": false, 00:13:13.020 "hdgst": false, 00:13:13.020 "ddgst": false, 00:13:13.020 "dhchap_key": "key3", 00:13:13.020 "method": "bdev_nvme_attach_controller", 00:13:13.020 "req_id": 1 00:13:13.020 } 00:13:13.020 Got JSON-RPC error response 00:13:13.020 response: 00:13:13.020 { 00:13:13.020 "code": -5, 00:13:13.020 "message": "Input/output error" 00:13:13.020 } 00:13:13.020 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 
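The Input/output error above is the expected outcome here: just before this attach, bdev_nvme_set_options narrowed the host's allowed DH-HMAC-CHAP digests to sha256 alone, so the key3 attach (set up for the sha512/ffdhe8192 case) cannot negotiate. The two host-side calls involved, copied from this part of the trace (the wider lists are restored a few steps further down):

    # Narrow the host to a single digest; the following attach with key3 is expected to fail.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256

    # Later cases widen the digest and DH-group lists again, e.g.:
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192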
00:13:13.020 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:13.021 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:13.021 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:13.021 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:13:13.021 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:13:13.021 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:13.021 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:13.279 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:13.279 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:13.279 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:13.279 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:13.279 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:13.279 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:13.279 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:13.279 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:13.279 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:13.279 request: 00:13:13.279 { 00:13:13.279 "name": "nvme0", 00:13:13.279 "trtype": "tcp", 00:13:13.279 "traddr": "10.0.0.2", 00:13:13.279 "adrfam": "ipv4", 00:13:13.279 "trsvcid": "4420", 00:13:13.279 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:13.279 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de", 00:13:13.279 "prchk_reftag": false, 00:13:13.279 "prchk_guard": false, 00:13:13.279 "hdgst": false, 00:13:13.279 "ddgst": false, 00:13:13.279 "dhchap_key": "key3", 00:13:13.279 "method": "bdev_nvme_attach_controller", 00:13:13.279 "req_id": 1 00:13:13.279 } 00:13:13.279 Got JSON-RPC error response 
00:13:13.279 response: 00:13:13.279 { 00:13:13.279 "code": -5, 00:13:13.279 "message": "Input/output error" 00:13:13.279 } 00:13:13.279 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:13.279 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:13.279 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:13.279 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:13.279 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:13.279 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:13:13.279 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:13.279 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:13.279 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:13.279 19:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:13.538 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:13:13.538 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.538 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.538 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.538 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:13:13.538 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.538 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.538 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.538 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:13.538 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:13.538 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:13.538 19:51:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:13:13.538 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:13.538 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:13:13.538 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:13.538 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:13.538 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:14.104 request: 00:13:14.104 { 00:13:14.104 "name": "nvme0", 00:13:14.104 "trtype": "tcp", 00:13:14.104 "traddr": "10.0.0.2", 00:13:14.104 "adrfam": "ipv4", 00:13:14.104 "trsvcid": "4420", 00:13:14.104 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:14.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de", 00:13:14.104 "prchk_reftag": false, 00:13:14.104 "prchk_guard": false, 00:13:14.104 "hdgst": false, 00:13:14.104 "ddgst": false, 00:13:14.104 "dhchap_key": "key0", 00:13:14.104 "dhchap_ctrlr_key": "key1", 00:13:14.104 "method": "bdev_nvme_attach_controller", 00:13:14.104 "req_id": 1 00:13:14.104 } 00:13:14.104 Got JSON-RPC error response 00:13:14.104 response: 00:13:14.104 { 00:13:14.104 "code": -5, 00:13:14.104 "message": "Input/output error" 00:13:14.104 } 00:13:14.104 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:14.104 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:14.104 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:14.104 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:14.104 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:14.104 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:14.362 00:13:14.362 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:13:14.362 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:13:14.362 19:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.620 19:51:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.620 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:14.620 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.878 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:13:14.878 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:13:14.878 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 68461 00:13:14.878 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 68461 ']' 00:13:14.878 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 68461 00:13:14.878 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:14.878 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:14.878 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68461 00:13:14.878 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:14.878 killing process with pid 68461 00:13:14.878 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:14.878 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68461' 00:13:14.878 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 68461 00:13:14.878 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 68461 00:13:15.141 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:15.141 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:15.141 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:13:15.399 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:15.399 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:13:15.399 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:15.399 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:15.399 rmmod nvme_tcp 00:13:15.399 rmmod nvme_fabrics 00:13:15.399 rmmod nvme_keyring 00:13:15.400 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:15.400 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:13:15.400 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:13:15.400 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 71485 ']' 00:13:15.400 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 71485 00:13:15.400 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 71485 ']' 00:13:15.400 
19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 71485 00:13:15.400 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:15.400 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:15.400 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71485 00:13:15.400 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:15.400 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:15.400 killing process with pid 71485 00:13:15.400 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71485' 00:13:15.400 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 71485 00:13:15.400 19:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 71485 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.REn /tmp/spdk.key-sha256.L9B /tmp/spdk.key-sha384.29m /tmp/spdk.key-sha512.Zds /tmp/spdk.key-sha512.6Dh /tmp/spdk.key-sha384.PS2 /tmp/spdk.key-sha256.67W '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:15.659 00:13:15.659 real 2m51.281s 00:13:15.659 user 6m49.658s 00:13:15.659 sys 0m26.555s 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:15.659 ************************************ 00:13:15.659 END TEST nvmf_auth_target 00:13:15.659 ************************************ 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:15.659 ************************************ 00:13:15.659 START TEST nvmf_bdevio_no_huge 00:13:15.659 ************************************ 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:15.659 * Looking for test storage... 00:13:15.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:15.659 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:15.660 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:15.660 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.660 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:15.660 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:15.660 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:15.660 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.660 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.660 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:15.919 Cannot find device "nvmf_tgt_br" 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:15.919 Cannot find device "nvmf_tgt_br2" 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:15.919 Cannot find device "nvmf_tgt_br" 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:15.919 Cannot find device "nvmf_tgt_br2" 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:15.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:15.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:15.919 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:15.920 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:15.920 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:15.920 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:15.920 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:15.920 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:15.920 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:15.920 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:15.920 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:15.920 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:15.920 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:16.178 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:16.178 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:16.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:16.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:13:16.179 00:13:16.179 --- 10.0.0.2 ping statistics --- 00:13:16.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.179 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:16.179 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:16.179 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:13:16.179 00:13:16.179 --- 10.0.0.3 ping statistics --- 00:13:16.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.179 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:16.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:16.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:13:16.179 00:13:16.179 --- 10.0.0.1 ping statistics --- 00:13:16.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.179 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=71809 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 71809 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 71809 ']' 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:16.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:16.179 19:51:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:16.179 [2024-07-24 19:51:44.729608] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
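The three pings above confirm the virtual topology that nvmf_veth_init just assembled before the no-huge target starts. Condensed from the trace (same interface names, namespace, and addresses as defined earlier in this log), the setup amounts to:

    # Target-side network namespace plus veth pairs, bridged back to the initiator.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addresses: initiator on 10.0.0.1, target listeners on 10.0.0.2 and 10.0.0.3.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up, bridge the host-side ends, and open TCP/4420 toward the initiator.
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT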
00:13:16.179 [2024-07-24 19:51:44.729758] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:16.438 [2024-07-24 19:51:44.879828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:16.438 [2024-07-24 19:51:45.039585] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.438 [2024-07-24 19:51:45.039655] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.438 [2024-07-24 19:51:45.039671] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.438 [2024-07-24 19:51:45.039682] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.438 [2024-07-24 19:51:45.039691] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:16.438 [2024-07-24 19:51:45.040116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:16.438 [2024-07-24 19:51:45.040234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:16.438 [2024-07-24 19:51:45.040324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:16.438 [2024-07-24 19:51:45.040335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:16.438 [2024-07-24 19:51:45.046147] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:17.373 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:17.373 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:13:17.373 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:17.373 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:17.373 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:17.373 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.373 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:17.373 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.373 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:17.373 [2024-07-24 19:51:45.788472] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.373 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.373 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:17.373 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.373 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:17.373 Malloc0 00:13:17.373 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.373 19:51:45 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:17.373 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.373 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:17.373 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.373 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:17.373 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.374 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:17.374 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.374 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.374 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.374 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:17.374 [2024-07-24 19:51:45.836623] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.374 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.374 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:17.374 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:17.374 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:13:17.374 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:13:17.374 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:17.374 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:17.374 { 00:13:17.374 "params": { 00:13:17.374 "name": "Nvme$subsystem", 00:13:17.374 "trtype": "$TEST_TRANSPORT", 00:13:17.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:17.374 "adrfam": "ipv4", 00:13:17.374 "trsvcid": "$NVMF_PORT", 00:13:17.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:17.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:17.374 "hdgst": ${hdgst:-false}, 00:13:17.374 "ddgst": ${ddgst:-false} 00:13:17.374 }, 00:13:17.374 "method": "bdev_nvme_attach_controller" 00:13:17.374 } 00:13:17.374 EOF 00:13:17.374 )") 00:13:17.374 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:13:17.374 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
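Outside the harness, the target-side bring-up that rpc_cmd has just traced can be reproduced by hand with scripts/rpc.py (path relative to the SPDK repo root, talking to the target's default /var/tmp/spdk.sock); a minimal sketch, with every flag taken verbatim from the trace and only the invocation style assumed:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                                      # 64 MiB ram-backed bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener notice appears, any NVMe/TCP initiator on the bridge network can reach nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, which is what the bdevio run below does through the JSON config being generated here.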
00:13:17.374 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:13:17.374 19:51:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:17.374 "params": { 00:13:17.374 "name": "Nvme1", 00:13:17.374 "trtype": "tcp", 00:13:17.374 "traddr": "10.0.0.2", 00:13:17.374 "adrfam": "ipv4", 00:13:17.374 "trsvcid": "4420", 00:13:17.374 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:17.374 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:17.374 "hdgst": false, 00:13:17.374 "ddgst": false 00:13:17.374 }, 00:13:17.374 "method": "bdev_nvme_attach_controller" 00:13:17.374 }' 00:13:17.374 [2024-07-24 19:51:45.888218] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:13:17.374 [2024-07-24 19:51:45.888296] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71845 ] 00:13:17.374 [2024-07-24 19:51:46.028928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:17.632 [2024-07-24 19:51:46.185109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.632 [2024-07-24 19:51:46.185263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.632 [2024-07-24 19:51:46.185269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.632 [2024-07-24 19:51:46.199095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:17.891 I/O targets: 00:13:17.891 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:17.891 00:13:17.891 00:13:17.891 CUnit - A unit testing framework for C - Version 2.1-3 00:13:17.891 http://cunit.sourceforge.net/ 00:13:17.891 00:13:17.891 00:13:17.891 Suite: bdevio tests on: Nvme1n1 00:13:17.891 Test: blockdev write read block ...passed 00:13:17.891 Test: blockdev write zeroes read block ...passed 00:13:17.891 Test: blockdev write zeroes read no split ...passed 00:13:17.891 Test: blockdev write zeroes read split ...passed 00:13:17.891 Test: blockdev write zeroes read split partial ...passed 00:13:17.891 Test: blockdev reset ...[2024-07-24 19:51:46.403787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:17.891 [2024-07-24 19:51:46.403903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa09870 (9): Bad file descriptor 00:13:17.891 [2024-07-24 19:51:46.420358] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:17.891 passed 00:13:17.891 Test: blockdev write read 8 blocks ...passed 00:13:17.891 Test: blockdev write read size > 128k ...passed 00:13:17.891 Test: blockdev write read invalid size ...passed 00:13:17.891 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:17.891 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:17.891 Test: blockdev write read max offset ...passed 00:13:17.891 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:17.891 Test: blockdev writev readv 8 blocks ...passed 00:13:17.891 Test: blockdev writev readv 30 x 1block ...passed 00:13:17.891 Test: blockdev writev readv block ...passed 00:13:17.891 Test: blockdev writev readv size > 128k ...passed 00:13:17.891 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:17.891 Test: blockdev comparev and writev ...[2024-07-24 19:51:46.433147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:17.891 [2024-07-24 19:51:46.433767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:17.891 [2024-07-24 19:51:46.434158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:17.891 [2024-07-24 19:51:46.434180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:17.891 [2024-07-24 19:51:46.434519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:17.891 [2024-07-24 19:51:46.434562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:17.891 [2024-07-24 19:51:46.434581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:17.891 [2024-07-24 19:51:46.434591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:17.891 [2024-07-24 19:51:46.434887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:17.891 [2024-07-24 19:51:46.434905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:17.891 [2024-07-24 19:51:46.434921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:17.891 [2024-07-24 19:51:46.434931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:17.891 [2024-07-24 19:51:46.435371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:17.891 [2024-07-24 19:51:46.435399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:17.891 [2024-07-24 19:51:46.435417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:17.891 [2024-07-24 19:51:46.435427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:17.891 passed 00:13:17.891 Test: blockdev nvme passthru rw ...passed 00:13:17.891 Test: blockdev nvme passthru vendor specific ...[2024-07-24 19:51:46.436852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:17.891 [2024-07-24 19:51:46.436945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:17.891 [2024-07-24 19:51:46.437255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:17.891 [2024-07-24 19:51:46.437289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:17.891 [2024-07-24 19:51:46.437590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:17.891 [2024-07-24 19:51:46.437622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:17.891 [2024-07-24 19:51:46.437876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:17.891 [2024-07-24 19:51:46.437970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:17.891 passed 00:13:17.891 Test: blockdev nvme admin passthru ...passed 00:13:17.891 Test: blockdev copy ...passed 00:13:17.891 00:13:17.891 Run Summary: Type Total Ran Passed Failed Inactive 00:13:17.891 suites 1 1 n/a 0 0 00:13:17.891 tests 23 23 23 0 0 00:13:17.891 asserts 152 152 152 0 n/a 00:13:17.891 00:13:17.891 Elapsed time = 0.176 seconds 00:13:18.151 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.151 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.151 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:18.151 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.151 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:18.151 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:18.151 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:18.151 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:13:18.410 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:18.410 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:13:18.410 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:18.410 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:18.410 rmmod nvme_tcp 00:13:18.410 rmmod nvme_fabrics 00:13:18.410 rmmod nvme_keyring 00:13:18.410 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:18.410 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:13:18.410 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:13:18.410 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 71809 ']' 00:13:18.410 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 71809 00:13:18.410 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 71809 ']' 00:13:18.410 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 71809 00:13:18.410 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:13:18.410 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:18.410 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71809 00:13:18.410 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:13:18.410 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:13:18.410 killing process with pid 71809 00:13:18.410 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71809' 00:13:18.410 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 71809 00:13:18.410 19:51:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 71809 00:13:18.668 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:18.668 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:18.668 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:18.668 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:18.668 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:18.668 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.668 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.668 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:18.926 00:13:18.926 real 0m3.142s 00:13:18.926 user 0m10.313s 00:13:18.926 sys 0m1.262s 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:18.926 ************************************ 00:13:18.926 END TEST nvmf_bdevio_no_huge 00:13:18.926 ************************************ 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
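The teardown traced just before the END TEST banner (and which the same EXIT trap will repeat after the TLS suite) boils down to a handful of commands; a condensed sketch, with the pid obviously specific to this run and with _remove_spdk_ns assumed, not shown in the trace, to delete the nvmf_tgt_ns_spdk namespace:

modprobe -v -r nvme-tcp        # under set +e; the rmmod lines above show nvme_tcp, nvme_fabrics, nvme_keyring going away
modprobe -v -r nvme-fabrics
kill 71809 && wait 71809       # killprocess: stop the nvmf_tgt started for this test
# _remove_spdk_ns: assumed equivalent to `ip netns delete nvmf_tgt_ns_spdk`
ip -4 addr flush nvmf_init_if  # drop 10.0.0.1/24 from the initiator-side veth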
00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:18.926 ************************************ 00:13:18.926 START TEST nvmf_tls 00:13:18.926 ************************************ 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:18.926 * Looking for test storage... 00:13:18.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.926 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:18.927 Cannot find device 
"nvmf_tgt_br" 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # true 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:18.927 Cannot find device "nvmf_tgt_br2" 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # true 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:18.927 Cannot find device "nvmf_tgt_br" 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # true 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:18.927 Cannot find device "nvmf_tgt_br2" 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # true 00:13:18.927 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:19.185 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # true 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:19.185 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:19.185 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:19.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:19.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:13:19.185 00:13:19.185 --- 10.0.0.2 ping statistics --- 00:13:19.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.185 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:13:19.442 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:19.442 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:19.442 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:13:19.442 00:13:19.442 --- 10.0.0.3 ping statistics --- 00:13:19.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.442 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:19.442 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:19.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:19.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:13:19.442 00:13:19.442 --- 10.0.0.1 ping statistics --- 00:13:19.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.442 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:13:19.442 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:19.442 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:13:19.442 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:19.442 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:19.442 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:19.442 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:19.442 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:19.442 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:19.442 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:19.442 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:19.442 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:19.442 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:19.442 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:19.443 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72027 00:13:19.443 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72027 00:13:19.443 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72027 ']' 00:13:19.443 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:19.443 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.443 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:19.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.443 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.443 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:19.443 19:51:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:19.443 [2024-07-24 19:51:47.940946] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:13:19.443 [2024-07-24 19:51:47.941059] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.443 [2024-07-24 19:51:48.086150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.700 [2024-07-24 19:51:48.246390] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:19.700 [2024-07-24 19:51:48.246439] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:19.700 [2024-07-24 19:51:48.246451] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:19.701 [2024-07-24 19:51:48.246459] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:19.701 [2024-07-24 19:51:48.246467] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:19.701 [2024-07-24 19:51:48.246495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.267 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:20.267 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:20.267 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:20.267 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:20.267 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:20.267 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.267 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:13:20.267 19:51:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:20.525 true 00:13:20.784 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:13:20.784 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:21.040 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:13:21.040 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:13:21.040 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:21.299 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:21.299 19:51:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:13:21.557 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:13:21.557 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:13:21.557 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:21.815 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:13:21.815 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:13:22.073 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:13:22.073 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:13:22.073 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:22.073 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:13:22.330 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:13:22.330 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:13:22.330 19:51:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:22.588 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:22.588 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:13:22.845 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:13:22.845 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:13:22.845 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:23.102 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:23.102 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.h74Z6jgWdr 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.sD37nB2xpa 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.h74Z6jgWdr 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.sD37nB2xpa 00:13:23.363 19:51:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:23.622 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:24.186 [2024-07-24 19:51:52.574684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:24.186 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.h74Z6jgWdr 00:13:24.186 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.h74Z6jgWdr 00:13:24.186 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:24.443 [2024-07-24 19:51:52.930708] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:24.443 19:51:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:24.702 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:24.960 [2024-07-24 19:51:53.466814] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:24.960 [2024-07-24 19:51:53.467051] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.960 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:25.217 malloc0 00:13:25.217 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:25.476 19:51:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.h74Z6jgWdr 00:13:25.734 [2024-07-24 19:51:54.201818] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:25.734 19:51:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.h74Z6jgWdr 00:13:37.934 Initializing NVMe Controllers 00:13:37.934 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:37.934 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:37.934 Initialization complete. Launching workers. 00:13:37.934 ======================================================== 00:13:37.934 Latency(us) 00:13:37.934 Device Information : IOPS MiB/s Average min max 00:13:37.934 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9548.79 37.30 6704.15 1207.93 8623.46 00:13:37.934 ======================================================== 00:13:37.934 Total : 9548.79 37.30 6704.15 1207.93 8623.46 00:13:37.934 00:13:37.934 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.h74Z6jgWdr 00:13:37.934 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:37.934 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:37.934 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:37.934 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.h74Z6jgWdr' 00:13:37.934 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:37.934 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72258 00:13:37.934 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:37.934 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:37.934 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72258 /var/tmp/bdevperf.sock 00:13:37.934 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72258 ']' 00:13:37.934 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:37.934 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:37.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:37.934 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
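The TLS-specific part of the setup above reduces to four steps: write an interchange-format PSK (NVMeTLSkey-1:01:<base64 blob>:) into a mode-0600 file, open the subsystem listener with -k, register the host NQN against that file with --psk, and hand the initiator the same file via --psk-path. A condensed sketch reusing the key, NQNs and addresses from this run; the /tmp path is just what mktemp returned here, and relative paths stand in for the absolute /home/vagrant/spdk_repo/spdk paths in the trace:

KEY=/tmp/tmp.h74Z6jgWdr
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"
chmod 0600 "$KEY"
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"
# initiator side, run inside the target namespace exactly as traced above
ip netns exec nvmf_tgt_ns_spdk build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
    --psk-path "$KEY"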
00:13:37.934 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:37.934 19:52:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:37.934 [2024-07-24 19:52:04.458877] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:13:37.934 [2024-07-24 19:52:04.458957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72258 ] 00:13:37.934 [2024-07-24 19:52:04.591067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.934 [2024-07-24 19:52:04.705056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.934 [2024-07-24 19:52:04.760236] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:37.934 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:37.934 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:37.934 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.h74Z6jgWdr 00:13:37.934 [2024-07-24 19:52:05.614505] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:37.934 [2024-07-24 19:52:05.614643] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:37.934 TLSTESTn1 00:13:37.934 19:52:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:37.934 Running I/O for 10 seconds... 
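For reference, the 10-second run whose numbers follow has three moving parts, all visible in the trace above: a paused bdevperf instance (-z) with its own RPC socket, one bdev_nvme_attach_controller call that performs the TLS connect and creates the TLSTESTn1 bdev, and bdevperf.py to start the workload and collect results. Condensed sketch; the backgrounding and the wait for the socket are an assumption standing in for the harness's waitforlisten:

build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# once /var/tmp/bdevperf.sock is listening:
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.h74Z6jgWdr
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests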
00:13:47.912 00:13:47.912 Latency(us) 00:13:47.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:47.912 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:47.912 Verification LBA range: start 0x0 length 0x2000 00:13:47.912 TLSTESTn1 : 10.02 3954.02 15.45 0.00 0.00 32308.84 7149.38 33602.09 00:13:47.912 =================================================================================================================== 00:13:47.912 Total : 3954.02 15.45 0.00 0.00 32308.84 7149.38 33602.09 00:13:47.912 0 00:13:47.912 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:47.912 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 72258 00:13:47.912 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72258 ']' 00:13:47.912 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72258 00:13:47.912 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:47.912 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:47.912 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72258 00:13:47.912 killing process with pid 72258 00:13:47.912 Received shutdown signal, test time was about 10.000000 seconds 00:13:47.912 00:13:47.912 Latency(us) 00:13:47.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:47.912 =================================================================================================================== 00:13:47.912 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:47.912 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:47.912 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:47.912 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72258' 00:13:47.912 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72258 00:13:47.912 [2024-07-24 19:52:15.897926] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:47.912 19:52:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72258 00:13:47.912 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sD37nB2xpa 00:13:47.912 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:47.912 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sD37nB2xpa 00:13:47.912 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:47.912 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:47.912 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:47.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
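The second run_bdevperf, traced from here on, is handed the other key (/tmp/tmp.sD37nB2xpa) while the target still only accepts /tmp/tmp.h74Z6jgWdr for host1, so the whole call is wrapped in NOT: the test case passes only if the attach fails. Conceptually (a simplification, not autotest_common.sh's exact implementation, which also validates the wrapped command as the type -t trace below shows) the wrapper behaves like:

NOT() {
    # succeed exactly when the wrapped command fails
    if "$@"; then return 1; else return 0; fi
}
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sD37nB2xpa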
00:13:47.912 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:47.912 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sD37nB2xpa 00:13:47.912 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:47.912 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:47.912 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:47.912 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sD37nB2xpa' 00:13:47.912 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:47.912 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72390 00:13:47.912 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:47.912 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:47.912 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72390 /var/tmp/bdevperf.sock 00:13:47.912 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72390 ']' 00:13:47.912 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:47.912 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:47.913 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:47.913 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:47.913 19:52:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:47.913 [2024-07-24 19:52:16.173261] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:13:47.913 [2024-07-24 19:52:16.173674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72390 ] 00:13:47.913 [2024-07-24 19:52:16.309199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.913 [2024-07-24 19:52:16.420668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.913 [2024-07-24 19:52:16.476019] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:48.479 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:48.479 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:48.479 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sD37nB2xpa 00:13:48.737 [2024-07-24 19:52:17.345181] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:48.737 [2024-07-24 19:52:17.345910] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:48.737 [2024-07-24 19:52:17.354458] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:48.737 [2024-07-24 19:52:17.354944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb401f0 (107): Transport endpoint is not connected 00:13:48.737 [2024-07-24 19:52:17.355926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb401f0 (9): Bad file descriptor 00:13:48.737 [2024-07-24 19:52:17.356922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:48.737 [2024-07-24 19:52:17.357435] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:48.737 request: 00:13:48.737 { 00:13:48.737 "name": "TLSTEST", 00:13:48.737 "trtype": "tcp", 00:13:48.737 "traddr": "10.0.0.2", 00:13:48.737 "adrfam": "ipv4", 00:13:48.737 "trsvcid": "4420", 00:13:48.737 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:48.737 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:48.737 "prchk_reftag": false, 00:13:48.737 "prchk_guard": false, 00:13:48.737 "hdgst": false, 00:13:48.737 "ddgst": false, 00:13:48.737 "psk": "/tmp/tmp.sD37nB2xpa", 00:13:48.737 "method": "bdev_nvme_attach_controller", 00:13:48.737 "req_id": 1 00:13:48.737 } 00:13:48.737 Got JSON-RPC error response 00:13:48.737 response: 00:13:48.737 { 00:13:48.737 "code": -5, 00:13:48.737 "message": "Input/output error" 00:13:48.737 } 00:13:48.737 [2024-07-24 19:52:17.357684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
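The failure above is the point of the tls.sh@146 case: the initiator presents /tmp/tmp.sD37nB2xpa, a key the target never registered, so the TLS connection is torn down before the controller can initialize and bdev_nvme_attach_controller returns JSON-RPC error -5 (Input/output error); the NOT wrapper turns that expected failure into a pass. The later wrong-hostnqn, wrong-subnqn, and no-PSK cases follow the same shape. A sketch of the check, with a plain `if` standing in for the harness's NOT helper:

# Expected-failure sketch: attaching with a PSK the target does not recognize must fail.
# (The harness wraps this in its NOT helper; a plain `if` is used here instead.)
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.sD37nB2xpa; then
    echo "unexpected: attach with an unregistered PSK succeeded" >&2
    exit 1
fi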
00:13:48.737 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72390 00:13:48.737 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72390 ']' 00:13:48.737 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72390 00:13:48.737 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:48.737 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:48.737 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72390 00:13:48.737 killing process with pid 72390 00:13:48.737 Received shutdown signal, test time was about 10.000000 seconds 00:13:48.737 00:13:48.737 Latency(us) 00:13:48.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.737 =================================================================================================================== 00:13:48.737 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:48.737 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:48.737 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:48.737 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72390' 00:13:48.737 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72390 00:13:48.737 [2024-07-24 19:52:17.401879] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:48.737 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72390 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.h74Z6jgWdr 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.h74Z6jgWdr 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.h74Z6jgWdr 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.h74Z6jgWdr' 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72419 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72419 /var/tmp/bdevperf.sock 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72419 ']' 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:48.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:48.995 19:52:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:49.254 [2024-07-24 19:52:17.669500] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:13:49.254 [2024-07-24 19:52:17.669952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72419 ] 00:13:49.254 [2024-07-24 19:52:17.802957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.254 [2024-07-24 19:52:17.909463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.513 [2024-07-24 19:52:17.965408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:50.079 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:50.079 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:50.079 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.h74Z6jgWdr 00:13:50.339 [2024-07-24 19:52:18.844450] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:50.339 [2024-07-24 19:52:18.844583] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:50.339 [2024-07-24 19:52:18.854512] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:50.339 [2024-07-24 19:52:18.854554] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:50.339 [2024-07-24 19:52:18.854622] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:50.339 [2024-07-24 19:52:18.855514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x103d1f0 (107): Transport endpoint is not connected 00:13:50.339 [2024-07-24 19:52:18.856497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x103d1f0 (9): Bad file descriptor 00:13:50.339 [2024-07-24 19:52:18.857493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:50.339 [2024-07-24 19:52:18.857528] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:50.339 [2024-07-24 19:52:18.857548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:50.339 request: 00:13:50.339 { 00:13:50.339 "name": "TLSTEST", 00:13:50.339 "trtype": "tcp", 00:13:50.339 "traddr": "10.0.0.2", 00:13:50.339 "adrfam": "ipv4", 00:13:50.339 "trsvcid": "4420", 00:13:50.339 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:50.339 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:50.339 "prchk_reftag": false, 00:13:50.339 "prchk_guard": false, 00:13:50.339 "hdgst": false, 00:13:50.339 "ddgst": false, 00:13:50.339 "psk": "/tmp/tmp.h74Z6jgWdr", 00:13:50.339 "method": "bdev_nvme_attach_controller", 00:13:50.339 "req_id": 1 00:13:50.339 } 00:13:50.339 Got JSON-RPC error response 00:13:50.339 response: 00:13:50.339 { 00:13:50.339 "code": -5, 00:13:50.339 "message": "Input/output error" 00:13:50.339 } 00:13:50.339 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72419 00:13:50.339 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72419 ']' 00:13:50.339 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72419 00:13:50.339 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:50.339 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:50.339 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72419 00:13:50.339 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:50.339 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:50.339 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72419' 00:13:50.339 killing process with pid 72419 00:13:50.339 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72419 00:13:50.339 Received shutdown signal, test time was about 10.000000 seconds 00:13:50.339 00:13:50.339 Latency(us) 00:13:50.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.339 =================================================================================================================== 00:13:50.339 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:50.339 19:52:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72419 00:13:50.339 [2024-07-24 19:52:18.907621] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.h74Z6jgWdr 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.h74Z6jgWdr 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:50.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.h74Z6jgWdr 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.h74Z6jgWdr' 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72442 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72442 /var/tmp/bdevperf.sock 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72442 ']' 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:50.598 19:52:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:50.598 [2024-07-24 19:52:19.178385] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:13:50.598 [2024-07-24 19:52:19.178613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72442 ] 00:13:50.856 [2024-07-24 19:52:19.311571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.856 [2024-07-24 19:52:19.419224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.856 [2024-07-24 19:52:19.474597] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:51.790 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:51.790 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:51.790 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.h74Z6jgWdr 00:13:51.790 [2024-07-24 19:52:20.365055] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:51.790 [2024-07-24 19:52:20.365719] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:51.790 [2024-07-24 19:52:20.374826] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:51.791 [2024-07-24 19:52:20.375029] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:51.791 [2024-07-24 19:52:20.375256] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:51.791 [2024-07-24 19:52:20.375760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21291f0 (107): Transport endpoint is not connected 00:13:51.791 [2024-07-24 19:52:20.376728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21291f0 (9): Bad file descriptor 00:13:51.791 [2024-07-24 19:52:20.377724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:13:51.791 [2024-07-24 19:52:20.377768] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:51.791 [2024-07-24 19:52:20.377789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
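The two failures above make the lookup key visible: the target matches an incoming TLS connection by the PSK identity string "NVMe0R01 <hostnqn> <subnqn>", and only the (host1, cnode1) pairing was registered, so both "NVMe0R01 ...host2 ...cnode1" and "NVMe0R01 ...host1 ...cnode2" come up empty and the attaches fail with -5, exactly as the tls.sh@149 and tls.sh@152 cases expect. Purely as an illustration (not something these tests do), the host2 variant would only resolve if the target also registered that host against cnode1:

# Hypothetical fix-up, for illustration only: register host2 with the same key so the
# identity "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" could be found.
# This test deliberately leaves host2 unregistered and expects the failure above.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.h74Z6jgWdr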
00:13:51.791 request: 00:13:51.791 { 00:13:51.791 "name": "TLSTEST", 00:13:51.791 "trtype": "tcp", 00:13:51.791 "traddr": "10.0.0.2", 00:13:51.791 "adrfam": "ipv4", 00:13:51.791 "trsvcid": "4420", 00:13:51.791 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:51.791 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:51.791 "prchk_reftag": false, 00:13:51.791 "prchk_guard": false, 00:13:51.791 "hdgst": false, 00:13:51.791 "ddgst": false, 00:13:51.791 "psk": "/tmp/tmp.h74Z6jgWdr", 00:13:51.791 "method": "bdev_nvme_attach_controller", 00:13:51.791 "req_id": 1 00:13:51.791 } 00:13:51.791 Got JSON-RPC error response 00:13:51.791 response: 00:13:51.791 { 00:13:51.791 "code": -5, 00:13:51.791 "message": "Input/output error" 00:13:51.791 } 00:13:51.791 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72442 00:13:51.791 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72442 ']' 00:13:51.791 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72442 00:13:51.791 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:51.791 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:51.791 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72442 00:13:51.791 killing process with pid 72442 00:13:51.791 Received shutdown signal, test time was about 10.000000 seconds 00:13:51.791 00:13:51.791 Latency(us) 00:13:51.791 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.791 =================================================================================================================== 00:13:51.791 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:51.791 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:51.791 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:51.791 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72442' 00:13:51.791 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72442 00:13:51.791 [2024-07-24 19:52:20.422340] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:51.791 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72442 00:13:52.048 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:52.048 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:52.048 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:52.048 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:52.048 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:52.048 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:52.048 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:52.048 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:52.048 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:52.048 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:52.048 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:52.048 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:52.048 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:52.048 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:52.048 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:52.048 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:52.048 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:52.049 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:52.049 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72474 00:13:52.049 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:52.049 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:52.049 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72474 /var/tmp/bdevperf.sock 00:13:52.049 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72474 ']' 00:13:52.049 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:52.049 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:52.049 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:52.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:52.049 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:52.049 19:52:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:52.049 [2024-07-24 19:52:20.695005] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:13:52.049 [2024-07-24 19:52:20.695230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72474 ] 00:13:52.306 [2024-07-24 19:52:20.831867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.306 [2024-07-24 19:52:20.939474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.564 [2024-07-24 19:52:20.996492] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:52.564 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:52.564 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:52.564 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:52.821 [2024-07-24 19:52:21.275008] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:52.821 [2024-07-24 19:52:21.276795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de0c00 (9): Bad file descriptor 00:13:52.821 [2024-07-24 19:52:21.277771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:52.821 [2024-07-24 19:52:21.277802] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:52.821 [2024-07-24 19:52:21.277821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:52.821 request: 00:13:52.821 { 00:13:52.821 "name": "TLSTEST", 00:13:52.821 "trtype": "tcp", 00:13:52.821 "traddr": "10.0.0.2", 00:13:52.821 "adrfam": "ipv4", 00:13:52.821 "trsvcid": "4420", 00:13:52.821 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:52.821 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:52.821 "prchk_reftag": false, 00:13:52.821 "prchk_guard": false, 00:13:52.821 "hdgst": false, 00:13:52.821 "ddgst": false, 00:13:52.821 "method": "bdev_nvme_attach_controller", 00:13:52.821 "req_id": 1 00:13:52.821 } 00:13:52.821 Got JSON-RPC error response 00:13:52.821 response: 00:13:52.821 { 00:13:52.821 "code": -5, 00:13:52.821 "message": "Input/output error" 00:13:52.821 } 00:13:52.821 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72474 00:13:52.821 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72474 ']' 00:13:52.821 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72474 00:13:52.821 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:52.821 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:52.821 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72474 00:13:52.821 killing process with pid 72474 00:13:52.821 Received shutdown signal, test time was about 10.000000 seconds 00:13:52.821 00:13:52.821 Latency(us) 00:13:52.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:52.821 =================================================================================================================== 00:13:52.821 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:52.821 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:52.821 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:52.822 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72474' 00:13:52.822 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72474 00:13:52.822 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72474 00:13:53.080 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:53.080 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:53.080 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:53.080 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:53.080 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:53.080 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 72027 00:13:53.080 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72027 ']' 00:13:53.080 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72027 00:13:53.080 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:13:53.080 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:53.080 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 
-- # ps --no-headers -o comm= 72027 00:13:53.080 killing process with pid 72027 00:13:53.080 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:53.080 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:53.080 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72027' 00:13:53.080 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72027 00:13:53.080 [2024-07-24 19:52:21.565741] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:53.080 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72027 00:13:53.338 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:53.338 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:53.338 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:53.338 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:53.338 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:53.338 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:13:53.338 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:53.338 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:53.338 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:13:53.338 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.MvmRCQKYCT 00:13:53.338 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:53.338 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.MvmRCQKYCT 00:13:53.338 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:13:53.338 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:53.338 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:53.338 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:53.338 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72505 00:13:53.338 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72505 00:13:53.338 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72505 ']' 00:13:53.338 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.338 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:53.338 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:13:53.338 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.338 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:53.338 19:52:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:53.338 [2024-07-24 19:52:21.894044] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:13:53.339 [2024-07-24 19:52:21.894122] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.596 [2024-07-24 19:52:22.029504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.596 [2024-07-24 19:52:22.132230] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.596 [2024-07-24 19:52:22.132286] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.596 [2024-07-24 19:52:22.132297] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.596 [2024-07-24 19:52:22.132306] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.596 [2024-07-24 19:52:22.132319] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:53.596 [2024-07-24 19:52:22.132352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.596 [2024-07-24 19:52:22.187020] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:54.529 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:54.529 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:54.529 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:54.529 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:54.529 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:54.529 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:54.529 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.MvmRCQKYCT 00:13:54.529 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.MvmRCQKYCT 00:13:54.529 19:52:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:54.529 [2024-07-24 19:52:23.136344] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:54.529 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:54.789 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 -k 00:13:55.089 [2024-07-24 19:52:23.628475] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:55.089 [2024-07-24 19:52:23.628730] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.089 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:55.350 malloc0 00:13:55.350 19:52:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:55.608 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MvmRCQKYCT 00:13:55.866 [2024-07-24 19:52:24.423533] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:55.866 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MvmRCQKYCT 00:13:55.866 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:55.866 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:55.866 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:55.866 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.MvmRCQKYCT' 00:13:55.866 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:55.866 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72560 00:13:55.866 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:55.866 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:55.866 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72560 /var/tmp/bdevperf.sock 00:13:55.866 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72560 ']' 00:13:55.866 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:55.866 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:55.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:55.866 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:55.866 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:55.866 19:52:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:55.866 [2024-07-24 19:52:24.493343] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
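Between tls.sh@159 and tls.sh@165 above, the target side is provisioned for the MvmRCQKYCT key: format_interchange_psk wraps the raw hex key in the NVMe TLS interchange form (prefix NVMeTLSkey-1, hash indicator 02, base64 of the key material with a CRC-32 appended, trailing colon), the result is written to a mktemp file locked to 0600, and the target gets a TCP transport, subsystem cnode1, a TLS-enabled listener (-k), a malloc bdev exported as namespace 1, and a host1 registration pointing at that key file. A sketch of those steps under the assumptions stated in the comments; the inline Python mirrors what the helper's own `python -` step appears to do, and the byte order of the appended CRC-32 is an assumption:

# Target-side provisioning sketch (RPC commands are the ones from this run; the
# key-wrapping step is a hedged reconstruction of format_interchange_psk/format_key).
SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"

# NVMe TLS interchange form: NVMeTLSkey-1:<hash>:<base64(key || crc32(key))>:
key_long=$(python3 - <<'EOF'
import base64, struct, zlib
key = b"00112233445566778899aabbccddeeff0011223344556677"
crc = struct.pack("<I", zlib.crc32(key))   # little-endian CRC packing assumed here
print("NVMeTLSkey-1:02:{}:".format(base64.b64encode(key + crc).decode()))
EOF
)

key_path=$(mktemp)
echo -n "$key_long" > "$key_path"
chmod 0600 "$key_path"   # anything more permissive is rejected later in this log

"$RPC" nvmf_create_transport -t tcp -o
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
"$RPC" bdev_malloc_create 32 4096 -b malloc0
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
"$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"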
00:13:55.866 [2024-07-24 19:52:24.493465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72560 ] 00:13:56.124 [2024-07-24 19:52:24.631520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.124 [2024-07-24 19:52:24.749537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:56.382 [2024-07-24 19:52:24.807969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:56.947 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:56.947 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:13:56.947 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MvmRCQKYCT 00:13:57.206 [2024-07-24 19:52:25.649867] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:57.206 [2024-07-24 19:52:25.650012] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:57.206 TLSTESTn1 00:13:57.206 19:52:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:57.206 Running I/O for 10 seconds... 00:14:07.205 00:14:07.205 Latency(us) 00:14:07.205 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.205 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:07.205 Verification LBA range: start 0x0 length 0x2000 00:14:07.205 TLSTESTn1 : 10.01 3696.42 14.44 0.00 0.00 34572.58 5123.72 27644.28 00:14:07.205 =================================================================================================================== 00:14:07.205 Total : 3696.42 14.44 0.00 0.00 34572.58 5123.72 27644.28 00:14:07.205 0 00:14:07.464 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:07.464 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 72560 00:14:07.464 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72560 ']' 00:14:07.464 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72560 00:14:07.464 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:07.464 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:07.464 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72560 00:14:07.464 killing process with pid 72560 00:14:07.464 Received shutdown signal, test time was about 10.000000 seconds 00:14:07.464 00:14:07.464 Latency(us) 00:14:07.464 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.464 =================================================================================================================== 00:14:07.464 Total : 0.00 0.00 0.00 0.00 
0.00 0.00 0.00 00:14:07.464 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:07.464 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:07.464 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72560' 00:14:07.464 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72560 00:14:07.464 [2024-07-24 19:52:35.917344] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:07.464 19:52:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72560 00:14:07.722 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.MvmRCQKYCT 00:14:07.722 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MvmRCQKYCT 00:14:07.722 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:07.722 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MvmRCQKYCT 00:14:07.722 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:07.722 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.722 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:07.722 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.722 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MvmRCQKYCT 00:14:07.722 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:07.722 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:07.722 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:07.722 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.MvmRCQKYCT' 00:14:07.722 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:07.722 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72689 00:14:07.722 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:07.722 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:07.722 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72689 /var/tmp/bdevperf.sock 00:14:07.722 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72689 ']' 00:14:07.722 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:07.722 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 
00:14:07.722 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:07.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:07.722 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:07.722 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:07.722 [2024-07-24 19:52:36.210121] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:14:07.722 [2024-07-24 19:52:36.210238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72689 ] 00:14:07.722 [2024-07-24 19:52:36.349045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.981 [2024-07-24 19:52:36.463179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.981 [2024-07-24 19:52:36.517123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:07.981 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:07.981 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:07.981 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MvmRCQKYCT 00:14:08.239 [2024-07-24 19:52:36.876842] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:08.239 [2024-07-24 19:52:36.876936] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:08.239 [2024-07-24 19:52:36.876949] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.MvmRCQKYCT 00:14:08.239 request: 00:14:08.239 { 00:14:08.239 "name": "TLSTEST", 00:14:08.239 "trtype": "tcp", 00:14:08.239 "traddr": "10.0.0.2", 00:14:08.239 "adrfam": "ipv4", 00:14:08.239 "trsvcid": "4420", 00:14:08.239 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:08.239 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:08.239 "prchk_reftag": false, 00:14:08.239 "prchk_guard": false, 00:14:08.239 "hdgst": false, 00:14:08.239 "ddgst": false, 00:14:08.239 "psk": "/tmp/tmp.MvmRCQKYCT", 00:14:08.239 "method": "bdev_nvme_attach_controller", 00:14:08.239 "req_id": 1 00:14:08.239 } 00:14:08.239 Got JSON-RPC error response 00:14:08.239 response: 00:14:08.239 { 00:14:08.239 "code": -1, 00:14:08.239 "message": "Operation not permitted" 00:14:08.239 } 00:14:08.239 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 72689 00:14:08.239 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72689 ']' 00:14:08.239 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72689 00:14:08.239 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:08.498 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:08.498 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72689 00:14:08.498 killing process with pid 72689 00:14:08.498 Received shutdown signal, test time was about 10.000000 seconds 00:14:08.498 00:14:08.498 Latency(us) 00:14:08.498 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.498 =================================================================================================================== 00:14:08.498 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:08.498 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:08.498 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:08.498 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72689' 00:14:08.498 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72689 00:14:08.498 19:52:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72689 00:14:08.498 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:08.498 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:08.498 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:08.498 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:08.498 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:08.498 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 72505 00:14:08.498 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72505 ']' 00:14:08.498 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72505 00:14:08.498 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:08.498 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:08.498 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72505 00:14:08.821 killing process with pid 72505 00:14:08.821 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:08.821 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:08.821 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72505' 00:14:08.821 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72505 00:14:08.821 [2024-07-24 19:52:37.179260] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:08.821 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72505 00:14:08.821 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:14:08.821 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:08.821 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:08.821 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:08.821 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.821 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72714 00:14:08.821 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72714 00:14:08.821 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:08.821 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72714 ']' 00:14:08.821 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.822 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:08.822 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.822 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:08.822 19:52:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.107 [2024-07-24 19:52:37.472753] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:14:09.107 [2024-07-24 19:52:37.472885] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.107 [2024-07-24 19:52:37.610157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.107 [2024-07-24 19:52:37.725174] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.107 [2024-07-24 19:52:37.725251] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.107 [2024-07-24 19:52:37.725262] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.107 [2024-07-24 19:52:37.725269] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.107 [2024-07-24 19:52:37.725278] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
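The two helpers traced above, killprocess and the nvmfappstart/waitforlisten pair, reduce to the pattern below. This is a condensed bash sketch, not the real autotest_common.sh code: the retry loop, the rpc_get_methods poll, and the shortened paths are illustrative stand-ins for what the xtrace lines show.

    # Kill a pid only if it is still alive, logging the reactor name first.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0                       # already gone
        [ "$(uname)" = Linux ] && ps --no-headers -o comm= "$pid"
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }

    # Launch the target in its namespace, then poll its RPC socket until it answers.
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid"                               # abort if the target died early
        sleep 0.1
    done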
00:14:09.107 [2024-07-24 19:52:37.725315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.366 [2024-07-24 19:52:37.777407] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:09.933 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:09.933 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:09.933 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:09.933 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:09.933 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.933 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.933 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.MvmRCQKYCT 00:14:09.933 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:09.933 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.MvmRCQKYCT 00:14:09.933 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:14:09.933 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:09.933 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:14:09.933 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:09.933 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.MvmRCQKYCT 00:14:09.933 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.MvmRCQKYCT 00:14:09.933 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:10.192 [2024-07-24 19:52:38.718090] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.192 19:52:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:10.452 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:10.711 [2024-07-24 19:52:39.258184] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:10.711 [2024-07-24 19:52:39.258445] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:10.711 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:10.969 malloc0 00:14:10.969 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:11.228 19:52:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MvmRCQKYCT 00:14:11.487 [2024-07-24 19:52:40.041482] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:11.487 [2024-07-24 19:52:40.041531] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:14:11.487 [2024-07-24 19:52:40.041569] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:11.487 request: 00:14:11.487 { 00:14:11.487 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.487 "host": "nqn.2016-06.io.spdk:host1", 00:14:11.488 "psk": "/tmp/tmp.MvmRCQKYCT", 00:14:11.488 "method": "nvmf_subsystem_add_host", 00:14:11.488 "req_id": 1 00:14:11.488 } 00:14:11.488 Got JSON-RPC error response 00:14:11.488 response: 00:14:11.488 { 00:14:11.488 "code": -32603, 00:14:11.488 "message": "Internal error" 00:14:11.488 } 00:14:11.488 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:11.488 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:11.488 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:11.488 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:11.488 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 72714 00:14:11.488 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72714 ']' 00:14:11.488 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72714 00:14:11.488 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:11.488 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:11.488 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72714 00:14:11.488 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:11.488 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:11.488 killing process with pid 72714 00:14:11.488 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72714' 00:14:11.488 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72714 00:14:11.488 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72714 00:14:11.747 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.MvmRCQKYCT 00:14:11.747 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:14:11.747 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:11.747 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:11.747 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:11.747 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72782 00:14:11.747 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:11.747 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72782 00:14:11.747 19:52:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72782 ']' 00:14:11.747 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.747 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:11.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.747 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.747 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:11.747 19:52:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:11.747 [2024-07-24 19:52:40.410644] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:14:11.747 [2024-07-24 19:52:40.410812] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.006 [2024-07-24 19:52:40.559458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.006 [2024-07-24 19:52:40.667987] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.006 [2024-07-24 19:52:40.668065] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.006 [2024-07-24 19:52:40.668078] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.006 [2024-07-24 19:52:40.668086] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.006 [2024-07-24 19:52:40.668093] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
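The failed nvmf_subsystem_add_host above and the retry that follows come down to file permissions on the PSK. Condensed from the traced target/tls.sh steps (flags copied verbatim from the xtrace lines; "rpc.py" here stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py against the default /var/tmp/spdk.sock):

    key=/tmp/tmp.MvmRCQKYCT
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"
    # While the key file is too permissive, the last call fails with "Incorrect
    # permissions for PSK file" (JSON-RPC -32603), which is what the NOT wrapper
    # at target/tls.sh@177 asserts. After target/tls.sh@181 runs:
    chmod 0600 "$key"
    # and the target is restarted, the same sequence succeeds, emitting only the
    # PSK-path deprecation warning.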
00:14:12.006 [2024-07-24 19:52:40.668129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.264 [2024-07-24 19:52:40.721435] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:12.832 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:12.832 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:12.832 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:12.832 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:12.832 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:12.832 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:12.832 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.MvmRCQKYCT 00:14:12.832 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.MvmRCQKYCT 00:14:12.832 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:13.091 [2024-07-24 19:52:41.632488] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.091 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:13.351 19:52:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:13.609 [2024-07-24 19:52:42.264648] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:13.609 [2024-07-24 19:52:42.264905] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.868 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:14.126 malloc0 00:14:14.126 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:14.384 19:52:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MvmRCQKYCT 00:14:14.643 [2024-07-24 19:52:43.132004] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:14.643 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=72837 00:14:14.643 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:14.643 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:14.643 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 72837 /var/tmp/bdevperf.sock 00:14:14.643 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72837 ']' 
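The initiator side that follows uses the same remote-control pattern: bdevperf is started idle (-z) on its own RPC socket, the TLS-enabled NVMe-oF controller is attached over that socket, and the run is kicked off later with the bundled helper script. A hedged condensation of the traced commands (paths shortened relative to the /home/vagrant/spdk_repo/spdk checkout; backgrounding with & is implied by the pid capture):

    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MvmRCQKYCT
    ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests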
00:14:14.643 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:14.643 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:14.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:14.644 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:14.644 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:14.644 19:52:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.644 [2024-07-24 19:52:43.206989] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:14:14.644 [2024-07-24 19:52:43.207088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72837 ] 00:14:14.901 [2024-07-24 19:52:43.348936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.901 [2024-07-24 19:52:43.493534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.901 [2024-07-24 19:52:43.557702] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:15.834 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:15.834 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:15.834 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MvmRCQKYCT 00:14:15.834 [2024-07-24 19:52:44.474947] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:15.834 [2024-07-24 19:52:44.475076] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:16.091 TLSTESTn1 00:14:16.091 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:16.445 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:14:16.445 "subsystems": [ 00:14:16.445 { 00:14:16.445 "subsystem": "keyring", 00:14:16.445 "config": [] 00:14:16.445 }, 00:14:16.445 { 00:14:16.445 "subsystem": "iobuf", 00:14:16.445 "config": [ 00:14:16.445 { 00:14:16.445 "method": "iobuf_set_options", 00:14:16.445 "params": { 00:14:16.445 "small_pool_count": 8192, 00:14:16.445 "large_pool_count": 1024, 00:14:16.445 "small_bufsize": 8192, 00:14:16.445 "large_bufsize": 135168 00:14:16.445 } 00:14:16.445 } 00:14:16.445 ] 00:14:16.445 }, 00:14:16.445 { 00:14:16.445 "subsystem": "sock", 00:14:16.445 "config": [ 00:14:16.445 { 00:14:16.445 "method": "sock_set_default_impl", 00:14:16.445 "params": { 00:14:16.445 "impl_name": "uring" 00:14:16.445 } 00:14:16.445 }, 00:14:16.445 { 00:14:16.445 "method": "sock_impl_set_options", 00:14:16.445 "params": { 00:14:16.445 "impl_name": "ssl", 00:14:16.445 "recv_buf_size": 4096, 00:14:16.445 
"send_buf_size": 4096, 00:14:16.445 "enable_recv_pipe": true, 00:14:16.445 "enable_quickack": false, 00:14:16.445 "enable_placement_id": 0, 00:14:16.445 "enable_zerocopy_send_server": true, 00:14:16.445 "enable_zerocopy_send_client": false, 00:14:16.445 "zerocopy_threshold": 0, 00:14:16.445 "tls_version": 0, 00:14:16.445 "enable_ktls": false 00:14:16.445 } 00:14:16.445 }, 00:14:16.445 { 00:14:16.445 "method": "sock_impl_set_options", 00:14:16.445 "params": { 00:14:16.445 "impl_name": "posix", 00:14:16.445 "recv_buf_size": 2097152, 00:14:16.445 "send_buf_size": 2097152, 00:14:16.445 "enable_recv_pipe": true, 00:14:16.445 "enable_quickack": false, 00:14:16.445 "enable_placement_id": 0, 00:14:16.445 "enable_zerocopy_send_server": true, 00:14:16.445 "enable_zerocopy_send_client": false, 00:14:16.445 "zerocopy_threshold": 0, 00:14:16.445 "tls_version": 0, 00:14:16.445 "enable_ktls": false 00:14:16.445 } 00:14:16.445 }, 00:14:16.445 { 00:14:16.445 "method": "sock_impl_set_options", 00:14:16.445 "params": { 00:14:16.445 "impl_name": "uring", 00:14:16.445 "recv_buf_size": 2097152, 00:14:16.445 "send_buf_size": 2097152, 00:14:16.445 "enable_recv_pipe": true, 00:14:16.445 "enable_quickack": false, 00:14:16.445 "enable_placement_id": 0, 00:14:16.445 "enable_zerocopy_send_server": false, 00:14:16.445 "enable_zerocopy_send_client": false, 00:14:16.445 "zerocopy_threshold": 0, 00:14:16.445 "tls_version": 0, 00:14:16.445 "enable_ktls": false 00:14:16.445 } 00:14:16.445 } 00:14:16.445 ] 00:14:16.445 }, 00:14:16.445 { 00:14:16.445 "subsystem": "vmd", 00:14:16.445 "config": [] 00:14:16.445 }, 00:14:16.445 { 00:14:16.445 "subsystem": "accel", 00:14:16.445 "config": [ 00:14:16.445 { 00:14:16.445 "method": "accel_set_options", 00:14:16.445 "params": { 00:14:16.445 "small_cache_size": 128, 00:14:16.445 "large_cache_size": 16, 00:14:16.445 "task_count": 2048, 00:14:16.445 "sequence_count": 2048, 00:14:16.445 "buf_count": 2048 00:14:16.445 } 00:14:16.445 } 00:14:16.445 ] 00:14:16.445 }, 00:14:16.445 { 00:14:16.445 "subsystem": "bdev", 00:14:16.445 "config": [ 00:14:16.445 { 00:14:16.445 "method": "bdev_set_options", 00:14:16.445 "params": { 00:14:16.445 "bdev_io_pool_size": 65535, 00:14:16.445 "bdev_io_cache_size": 256, 00:14:16.445 "bdev_auto_examine": true, 00:14:16.445 "iobuf_small_cache_size": 128, 00:14:16.445 "iobuf_large_cache_size": 16 00:14:16.445 } 00:14:16.445 }, 00:14:16.445 { 00:14:16.445 "method": "bdev_raid_set_options", 00:14:16.445 "params": { 00:14:16.445 "process_window_size_kb": 1024, 00:14:16.445 "process_max_bandwidth_mb_sec": 0 00:14:16.445 } 00:14:16.445 }, 00:14:16.445 { 00:14:16.445 "method": "bdev_iscsi_set_options", 00:14:16.445 "params": { 00:14:16.445 "timeout_sec": 30 00:14:16.445 } 00:14:16.445 }, 00:14:16.445 { 00:14:16.445 "method": "bdev_nvme_set_options", 00:14:16.445 "params": { 00:14:16.445 "action_on_timeout": "none", 00:14:16.445 "timeout_us": 0, 00:14:16.445 "timeout_admin_us": 0, 00:14:16.445 "keep_alive_timeout_ms": 10000, 00:14:16.445 "arbitration_burst": 0, 00:14:16.445 "low_priority_weight": 0, 00:14:16.445 "medium_priority_weight": 0, 00:14:16.445 "high_priority_weight": 0, 00:14:16.445 "nvme_adminq_poll_period_us": 10000, 00:14:16.445 "nvme_ioq_poll_period_us": 0, 00:14:16.445 "io_queue_requests": 0, 00:14:16.445 "delay_cmd_submit": true, 00:14:16.445 "transport_retry_count": 4, 00:14:16.445 "bdev_retry_count": 3, 00:14:16.445 "transport_ack_timeout": 0, 00:14:16.445 "ctrlr_loss_timeout_sec": 0, 00:14:16.445 "reconnect_delay_sec": 0, 00:14:16.445 
"fast_io_fail_timeout_sec": 0, 00:14:16.445 "disable_auto_failback": false, 00:14:16.445 "generate_uuids": false, 00:14:16.445 "transport_tos": 0, 00:14:16.445 "nvme_error_stat": false, 00:14:16.445 "rdma_srq_size": 0, 00:14:16.445 "io_path_stat": false, 00:14:16.445 "allow_accel_sequence": false, 00:14:16.445 "rdma_max_cq_size": 0, 00:14:16.445 "rdma_cm_event_timeout_ms": 0, 00:14:16.445 "dhchap_digests": [ 00:14:16.445 "sha256", 00:14:16.445 "sha384", 00:14:16.445 "sha512" 00:14:16.445 ], 00:14:16.445 "dhchap_dhgroups": [ 00:14:16.445 "null", 00:14:16.445 "ffdhe2048", 00:14:16.445 "ffdhe3072", 00:14:16.445 "ffdhe4096", 00:14:16.445 "ffdhe6144", 00:14:16.445 "ffdhe8192" 00:14:16.445 ] 00:14:16.445 } 00:14:16.445 }, 00:14:16.445 { 00:14:16.445 "method": "bdev_nvme_set_hotplug", 00:14:16.445 "params": { 00:14:16.445 "period_us": 100000, 00:14:16.445 "enable": false 00:14:16.445 } 00:14:16.445 }, 00:14:16.445 { 00:14:16.445 "method": "bdev_malloc_create", 00:14:16.446 "params": { 00:14:16.446 "name": "malloc0", 00:14:16.446 "num_blocks": 8192, 00:14:16.446 "block_size": 4096, 00:14:16.446 "physical_block_size": 4096, 00:14:16.446 "uuid": "4edee6c8-697a-4595-8750-ce9b3390bc56", 00:14:16.446 "optimal_io_boundary": 0, 00:14:16.446 "md_size": 0, 00:14:16.446 "dif_type": 0, 00:14:16.446 "dif_is_head_of_md": false, 00:14:16.446 "dif_pi_format": 0 00:14:16.446 } 00:14:16.446 }, 00:14:16.446 { 00:14:16.446 "method": "bdev_wait_for_examine" 00:14:16.446 } 00:14:16.446 ] 00:14:16.446 }, 00:14:16.446 { 00:14:16.446 "subsystem": "nbd", 00:14:16.446 "config": [] 00:14:16.446 }, 00:14:16.446 { 00:14:16.446 "subsystem": "scheduler", 00:14:16.446 "config": [ 00:14:16.446 { 00:14:16.446 "method": "framework_set_scheduler", 00:14:16.446 "params": { 00:14:16.446 "name": "static" 00:14:16.446 } 00:14:16.446 } 00:14:16.446 ] 00:14:16.446 }, 00:14:16.446 { 00:14:16.446 "subsystem": "nvmf", 00:14:16.446 "config": [ 00:14:16.446 { 00:14:16.446 "method": "nvmf_set_config", 00:14:16.446 "params": { 00:14:16.446 "discovery_filter": "match_any", 00:14:16.446 "admin_cmd_passthru": { 00:14:16.446 "identify_ctrlr": false 00:14:16.446 } 00:14:16.446 } 00:14:16.446 }, 00:14:16.446 { 00:14:16.446 "method": "nvmf_set_max_subsystems", 00:14:16.446 "params": { 00:14:16.446 "max_subsystems": 1024 00:14:16.446 } 00:14:16.446 }, 00:14:16.446 { 00:14:16.446 "method": "nvmf_set_crdt", 00:14:16.446 "params": { 00:14:16.446 "crdt1": 0, 00:14:16.446 "crdt2": 0, 00:14:16.446 "crdt3": 0 00:14:16.446 } 00:14:16.446 }, 00:14:16.446 { 00:14:16.446 "method": "nvmf_create_transport", 00:14:16.446 "params": { 00:14:16.446 "trtype": "TCP", 00:14:16.446 "max_queue_depth": 128, 00:14:16.446 "max_io_qpairs_per_ctrlr": 127, 00:14:16.446 "in_capsule_data_size": 4096, 00:14:16.446 "max_io_size": 131072, 00:14:16.446 "io_unit_size": 131072, 00:14:16.446 "max_aq_depth": 128, 00:14:16.446 "num_shared_buffers": 511, 00:14:16.446 "buf_cache_size": 4294967295, 00:14:16.446 "dif_insert_or_strip": false, 00:14:16.446 "zcopy": false, 00:14:16.446 "c2h_success": false, 00:14:16.446 "sock_priority": 0, 00:14:16.446 "abort_timeout_sec": 1, 00:14:16.446 "ack_timeout": 0, 00:14:16.446 "data_wr_pool_size": 0 00:14:16.446 } 00:14:16.446 }, 00:14:16.446 { 00:14:16.446 "method": "nvmf_create_subsystem", 00:14:16.446 "params": { 00:14:16.446 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:16.446 "allow_any_host": false, 00:14:16.446 "serial_number": "SPDK00000000000001", 00:14:16.446 "model_number": "SPDK bdev Controller", 00:14:16.446 "max_namespaces": 10, 00:14:16.446 
"min_cntlid": 1, 00:14:16.446 "max_cntlid": 65519, 00:14:16.446 "ana_reporting": false 00:14:16.446 } 00:14:16.446 }, 00:14:16.446 { 00:14:16.446 "method": "nvmf_subsystem_add_host", 00:14:16.446 "params": { 00:14:16.446 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:16.446 "host": "nqn.2016-06.io.spdk:host1", 00:14:16.446 "psk": "/tmp/tmp.MvmRCQKYCT" 00:14:16.446 } 00:14:16.446 }, 00:14:16.446 { 00:14:16.446 "method": "nvmf_subsystem_add_ns", 00:14:16.446 "params": { 00:14:16.446 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:16.446 "namespace": { 00:14:16.446 "nsid": 1, 00:14:16.446 "bdev_name": "malloc0", 00:14:16.446 "nguid": "4EDEE6C8697A45958750CE9B3390BC56", 00:14:16.446 "uuid": "4edee6c8-697a-4595-8750-ce9b3390bc56", 00:14:16.446 "no_auto_visible": false 00:14:16.446 } 00:14:16.446 } 00:14:16.446 }, 00:14:16.446 { 00:14:16.446 "method": "nvmf_subsystem_add_listener", 00:14:16.446 "params": { 00:14:16.446 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:16.446 "listen_address": { 00:14:16.446 "trtype": "TCP", 00:14:16.446 "adrfam": "IPv4", 00:14:16.446 "traddr": "10.0.0.2", 00:14:16.446 "trsvcid": "4420" 00:14:16.446 }, 00:14:16.446 "secure_channel": true 00:14:16.446 } 00:14:16.446 } 00:14:16.446 ] 00:14:16.446 } 00:14:16.446 ] 00:14:16.446 }' 00:14:16.446 19:52:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:16.719 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:14:16.719 "subsystems": [ 00:14:16.719 { 00:14:16.719 "subsystem": "keyring", 00:14:16.719 "config": [] 00:14:16.719 }, 00:14:16.719 { 00:14:16.719 "subsystem": "iobuf", 00:14:16.719 "config": [ 00:14:16.719 { 00:14:16.719 "method": "iobuf_set_options", 00:14:16.719 "params": { 00:14:16.720 "small_pool_count": 8192, 00:14:16.720 "large_pool_count": 1024, 00:14:16.720 "small_bufsize": 8192, 00:14:16.720 "large_bufsize": 135168 00:14:16.720 } 00:14:16.720 } 00:14:16.720 ] 00:14:16.720 }, 00:14:16.720 { 00:14:16.720 "subsystem": "sock", 00:14:16.720 "config": [ 00:14:16.720 { 00:14:16.720 "method": "sock_set_default_impl", 00:14:16.720 "params": { 00:14:16.720 "impl_name": "uring" 00:14:16.720 } 00:14:16.720 }, 00:14:16.720 { 00:14:16.720 "method": "sock_impl_set_options", 00:14:16.720 "params": { 00:14:16.720 "impl_name": "ssl", 00:14:16.720 "recv_buf_size": 4096, 00:14:16.720 "send_buf_size": 4096, 00:14:16.720 "enable_recv_pipe": true, 00:14:16.720 "enable_quickack": false, 00:14:16.720 "enable_placement_id": 0, 00:14:16.720 "enable_zerocopy_send_server": true, 00:14:16.720 "enable_zerocopy_send_client": false, 00:14:16.720 "zerocopy_threshold": 0, 00:14:16.720 "tls_version": 0, 00:14:16.720 "enable_ktls": false 00:14:16.720 } 00:14:16.720 }, 00:14:16.720 { 00:14:16.720 "method": "sock_impl_set_options", 00:14:16.720 "params": { 00:14:16.720 "impl_name": "posix", 00:14:16.720 "recv_buf_size": 2097152, 00:14:16.720 "send_buf_size": 2097152, 00:14:16.720 "enable_recv_pipe": true, 00:14:16.720 "enable_quickack": false, 00:14:16.720 "enable_placement_id": 0, 00:14:16.720 "enable_zerocopy_send_server": true, 00:14:16.720 "enable_zerocopy_send_client": false, 00:14:16.720 "zerocopy_threshold": 0, 00:14:16.720 "tls_version": 0, 00:14:16.720 "enable_ktls": false 00:14:16.720 } 00:14:16.720 }, 00:14:16.720 { 00:14:16.720 "method": "sock_impl_set_options", 00:14:16.720 "params": { 00:14:16.720 "impl_name": "uring", 00:14:16.720 "recv_buf_size": 2097152, 00:14:16.720 "send_buf_size": 2097152, 
00:14:16.720 "enable_recv_pipe": true, 00:14:16.720 "enable_quickack": false, 00:14:16.720 "enable_placement_id": 0, 00:14:16.720 "enable_zerocopy_send_server": false, 00:14:16.720 "enable_zerocopy_send_client": false, 00:14:16.720 "zerocopy_threshold": 0, 00:14:16.720 "tls_version": 0, 00:14:16.720 "enable_ktls": false 00:14:16.720 } 00:14:16.720 } 00:14:16.720 ] 00:14:16.720 }, 00:14:16.720 { 00:14:16.720 "subsystem": "vmd", 00:14:16.720 "config": [] 00:14:16.720 }, 00:14:16.720 { 00:14:16.720 "subsystem": "accel", 00:14:16.720 "config": [ 00:14:16.720 { 00:14:16.720 "method": "accel_set_options", 00:14:16.720 "params": { 00:14:16.720 "small_cache_size": 128, 00:14:16.720 "large_cache_size": 16, 00:14:16.720 "task_count": 2048, 00:14:16.720 "sequence_count": 2048, 00:14:16.720 "buf_count": 2048 00:14:16.720 } 00:14:16.720 } 00:14:16.720 ] 00:14:16.720 }, 00:14:16.720 { 00:14:16.720 "subsystem": "bdev", 00:14:16.720 "config": [ 00:14:16.720 { 00:14:16.720 "method": "bdev_set_options", 00:14:16.720 "params": { 00:14:16.720 "bdev_io_pool_size": 65535, 00:14:16.720 "bdev_io_cache_size": 256, 00:14:16.720 "bdev_auto_examine": true, 00:14:16.720 "iobuf_small_cache_size": 128, 00:14:16.720 "iobuf_large_cache_size": 16 00:14:16.720 } 00:14:16.720 }, 00:14:16.720 { 00:14:16.720 "method": "bdev_raid_set_options", 00:14:16.720 "params": { 00:14:16.720 "process_window_size_kb": 1024, 00:14:16.720 "process_max_bandwidth_mb_sec": 0 00:14:16.720 } 00:14:16.720 }, 00:14:16.720 { 00:14:16.720 "method": "bdev_iscsi_set_options", 00:14:16.720 "params": { 00:14:16.720 "timeout_sec": 30 00:14:16.720 } 00:14:16.720 }, 00:14:16.720 { 00:14:16.720 "method": "bdev_nvme_set_options", 00:14:16.720 "params": { 00:14:16.720 "action_on_timeout": "none", 00:14:16.720 "timeout_us": 0, 00:14:16.720 "timeout_admin_us": 0, 00:14:16.720 "keep_alive_timeout_ms": 10000, 00:14:16.720 "arbitration_burst": 0, 00:14:16.720 "low_priority_weight": 0, 00:14:16.720 "medium_priority_weight": 0, 00:14:16.720 "high_priority_weight": 0, 00:14:16.720 "nvme_adminq_poll_period_us": 10000, 00:14:16.720 "nvme_ioq_poll_period_us": 0, 00:14:16.720 "io_queue_requests": 512, 00:14:16.720 "delay_cmd_submit": true, 00:14:16.720 "transport_retry_count": 4, 00:14:16.720 "bdev_retry_count": 3, 00:14:16.720 "transport_ack_timeout": 0, 00:14:16.720 "ctrlr_loss_timeout_sec": 0, 00:14:16.720 "reconnect_delay_sec": 0, 00:14:16.720 "fast_io_fail_timeout_sec": 0, 00:14:16.720 "disable_auto_failback": false, 00:14:16.720 "generate_uuids": false, 00:14:16.720 "transport_tos": 0, 00:14:16.720 "nvme_error_stat": false, 00:14:16.720 "rdma_srq_size": 0, 00:14:16.720 "io_path_stat": false, 00:14:16.720 "allow_accel_sequence": false, 00:14:16.720 "rdma_max_cq_size": 0, 00:14:16.720 "rdma_cm_event_timeout_ms": 0, 00:14:16.720 "dhchap_digests": [ 00:14:16.720 "sha256", 00:14:16.720 "sha384", 00:14:16.720 "sha512" 00:14:16.720 ], 00:14:16.720 "dhchap_dhgroups": [ 00:14:16.720 "null", 00:14:16.720 "ffdhe2048", 00:14:16.720 "ffdhe3072", 00:14:16.720 "ffdhe4096", 00:14:16.720 "ffdhe6144", 00:14:16.720 "ffdhe8192" 00:14:16.720 ] 00:14:16.720 } 00:14:16.720 }, 00:14:16.720 { 00:14:16.720 "method": "bdev_nvme_attach_controller", 00:14:16.720 "params": { 00:14:16.720 "name": "TLSTEST", 00:14:16.720 "trtype": "TCP", 00:14:16.720 "adrfam": "IPv4", 00:14:16.720 "traddr": "10.0.0.2", 00:14:16.720 "trsvcid": "4420", 00:14:16.720 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:16.720 "prchk_reftag": false, 00:14:16.720 "prchk_guard": false, 00:14:16.720 "ctrlr_loss_timeout_sec": 0, 
00:14:16.720 "reconnect_delay_sec": 0, 00:14:16.720 "fast_io_fail_timeout_sec": 0, 00:14:16.720 "psk": "/tmp/tmp.MvmRCQKYCT", 00:14:16.720 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:16.720 "hdgst": false, 00:14:16.720 "ddgst": false 00:14:16.720 } 00:14:16.720 }, 00:14:16.720 { 00:14:16.720 "method": "bdev_nvme_set_hotplug", 00:14:16.720 "params": { 00:14:16.720 "period_us": 100000, 00:14:16.720 "enable": false 00:14:16.720 } 00:14:16.720 }, 00:14:16.720 { 00:14:16.720 "method": "bdev_wait_for_examine" 00:14:16.720 } 00:14:16.720 ] 00:14:16.720 }, 00:14:16.720 { 00:14:16.720 "subsystem": "nbd", 00:14:16.720 "config": [] 00:14:16.720 } 00:14:16.720 ] 00:14:16.720 }' 00:14:16.720 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 72837 00:14:16.720 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72837 ']' 00:14:16.720 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72837 00:14:16.720 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:16.720 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:16.720 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72837 00:14:16.720 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:16.720 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:16.720 killing process with pid 72837 00:14:16.720 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72837' 00:14:16.720 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72837 00:14:16.720 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72837 00:14:16.720 Received shutdown signal, test time was about 10.000000 seconds 00:14:16.720 00:14:16.720 Latency(us) 00:14:16.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.720 =================================================================================================================== 00:14:16.720 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:16.720 [2024-07-24 19:52:45.266819] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:16.978 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 72782 00:14:16.978 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72782 ']' 00:14:16.978 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72782 00:14:16.978 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:16.978 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:16.978 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72782 00:14:16.978 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:16.978 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:16.978 killing process with pid 72782 00:14:16.978 19:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72782' 00:14:16.978 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72782 00:14:16.978 [2024-07-24 19:52:45.514364] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:16.978 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72782 00:14:17.236 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:17.236 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:17.236 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:17.236 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.236 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:14:17.236 "subsystems": [ 00:14:17.236 { 00:14:17.236 "subsystem": "keyring", 00:14:17.236 "config": [] 00:14:17.236 }, 00:14:17.236 { 00:14:17.236 "subsystem": "iobuf", 00:14:17.236 "config": [ 00:14:17.236 { 00:14:17.236 "method": "iobuf_set_options", 00:14:17.236 "params": { 00:14:17.236 "small_pool_count": 8192, 00:14:17.236 "large_pool_count": 1024, 00:14:17.236 "small_bufsize": 8192, 00:14:17.236 "large_bufsize": 135168 00:14:17.236 } 00:14:17.236 } 00:14:17.236 ] 00:14:17.236 }, 00:14:17.236 { 00:14:17.236 "subsystem": "sock", 00:14:17.236 "config": [ 00:14:17.236 { 00:14:17.236 "method": "sock_set_default_impl", 00:14:17.236 "params": { 00:14:17.236 "impl_name": "uring" 00:14:17.236 } 00:14:17.236 }, 00:14:17.236 { 00:14:17.236 "method": "sock_impl_set_options", 00:14:17.236 "params": { 00:14:17.236 "impl_name": "ssl", 00:14:17.236 "recv_buf_size": 4096, 00:14:17.236 "send_buf_size": 4096, 00:14:17.236 "enable_recv_pipe": true, 00:14:17.236 "enable_quickack": false, 00:14:17.236 "enable_placement_id": 0, 00:14:17.236 "enable_zerocopy_send_server": true, 00:14:17.236 "enable_zerocopy_send_client": false, 00:14:17.236 "zerocopy_threshold": 0, 00:14:17.236 "tls_version": 0, 00:14:17.236 "enable_ktls": false 00:14:17.236 } 00:14:17.236 }, 00:14:17.236 { 00:14:17.236 "method": "sock_impl_set_options", 00:14:17.236 "params": { 00:14:17.236 "impl_name": "posix", 00:14:17.236 "recv_buf_size": 2097152, 00:14:17.236 "send_buf_size": 2097152, 00:14:17.236 "enable_recv_pipe": true, 00:14:17.236 "enable_quickack": false, 00:14:17.236 "enable_placement_id": 0, 00:14:17.236 "enable_zerocopy_send_server": true, 00:14:17.236 "enable_zerocopy_send_client": false, 00:14:17.236 "zerocopy_threshold": 0, 00:14:17.236 "tls_version": 0, 00:14:17.236 "enable_ktls": false 00:14:17.236 } 00:14:17.236 }, 00:14:17.236 { 00:14:17.236 "method": "sock_impl_set_options", 00:14:17.236 "params": { 00:14:17.236 "impl_name": "uring", 00:14:17.236 "recv_buf_size": 2097152, 00:14:17.236 "send_buf_size": 2097152, 00:14:17.236 "enable_recv_pipe": true, 00:14:17.236 "enable_quickack": false, 00:14:17.236 "enable_placement_id": 0, 00:14:17.237 "enable_zerocopy_send_server": false, 00:14:17.237 "enable_zerocopy_send_client": false, 00:14:17.237 "zerocopy_threshold": 0, 00:14:17.237 "tls_version": 0, 00:14:17.237 "enable_ktls": false 00:14:17.237 } 00:14:17.237 } 00:14:17.237 ] 00:14:17.237 }, 00:14:17.237 { 00:14:17.237 "subsystem": "vmd", 00:14:17.237 "config": [] 00:14:17.237 }, 00:14:17.237 { 00:14:17.237 
"subsystem": "accel", 00:14:17.237 "config": [ 00:14:17.237 { 00:14:17.237 "method": "accel_set_options", 00:14:17.237 "params": { 00:14:17.237 "small_cache_size": 128, 00:14:17.237 "large_cache_size": 16, 00:14:17.237 "task_count": 2048, 00:14:17.237 "sequence_count": 2048, 00:14:17.237 "buf_count": 2048 00:14:17.237 } 00:14:17.237 } 00:14:17.237 ] 00:14:17.237 }, 00:14:17.237 { 00:14:17.237 "subsystem": "bdev", 00:14:17.237 "config": [ 00:14:17.237 { 00:14:17.237 "method": "bdev_set_options", 00:14:17.237 "params": { 00:14:17.237 "bdev_io_pool_size": 65535, 00:14:17.237 "bdev_io_cache_size": 256, 00:14:17.237 "bdev_auto_examine": true, 00:14:17.237 "iobuf_small_cache_size": 128, 00:14:17.237 "iobuf_large_cache_size": 16 00:14:17.237 } 00:14:17.237 }, 00:14:17.237 { 00:14:17.237 "method": "bdev_raid_set_options", 00:14:17.237 "params": { 00:14:17.237 "process_window_size_kb": 1024, 00:14:17.237 "process_max_bandwidth_mb_sec": 0 00:14:17.237 } 00:14:17.237 }, 00:14:17.237 { 00:14:17.237 "method": "bdev_iscsi_set_options", 00:14:17.237 "params": { 00:14:17.237 "timeout_sec": 30 00:14:17.237 } 00:14:17.237 }, 00:14:17.237 { 00:14:17.237 "method": "bdev_nvme_set_options", 00:14:17.237 "params": { 00:14:17.237 "action_on_timeout": "none", 00:14:17.237 "timeout_us": 0, 00:14:17.237 "timeout_admin_us": 0, 00:14:17.237 "keep_alive_timeout_ms": 10000, 00:14:17.237 "arbitration_burst": 0, 00:14:17.237 "low_priority_weight": 0, 00:14:17.237 "medium_priority_weight": 0, 00:14:17.237 "high_priority_weight": 0, 00:14:17.237 "nvme_adminq_poll_period_us": 10000, 00:14:17.237 "nvme_ioq_poll_period_us": 0, 00:14:17.237 "io_queue_requests": 0, 00:14:17.237 "delay_cmd_submit": true, 00:14:17.237 "transport_retry_count": 4, 00:14:17.237 "bdev_retry_count": 3, 00:14:17.237 "transport_ack_timeout": 0, 00:14:17.237 "ctrlr_loss_timeout_sec": 0, 00:14:17.237 "reconnect_delay_sec": 0, 00:14:17.237 "fast_io_fail_timeout_sec": 0, 00:14:17.237 "disable_auto_failback": false, 00:14:17.237 "generate_uuids": false, 00:14:17.237 "transport_tos": 0, 00:14:17.237 "nvme_error_stat": false, 00:14:17.237 "rdma_srq_size": 0, 00:14:17.237 "io_path_stat": false, 00:14:17.237 "allow_accel_sequence": false, 00:14:17.237 "rdma_max_cq_size": 0, 00:14:17.237 "rdma_cm_event_timeout_ms": 0, 00:14:17.237 "dhchap_digests": [ 00:14:17.237 "sha256", 00:14:17.237 "sha384", 00:14:17.237 "sha512" 00:14:17.237 ], 00:14:17.237 "dhchap_dhgroups": [ 00:14:17.237 "null", 00:14:17.237 "ffdhe2048", 00:14:17.237 "ffdhe3072", 00:14:17.237 "ffdhe4096", 00:14:17.237 "ffdhe6144", 00:14:17.237 "ffdhe8192" 00:14:17.237 ] 00:14:17.237 } 00:14:17.237 }, 00:14:17.237 { 00:14:17.237 "method": "bdev_nvme_set_hotplug", 00:14:17.237 "params": { 00:14:17.237 "period_us": 100000, 00:14:17.237 "enable": false 00:14:17.237 } 00:14:17.237 }, 00:14:17.237 { 00:14:17.237 "method": "bdev_malloc_create", 00:14:17.237 "params": { 00:14:17.237 "name": "malloc0", 00:14:17.237 "num_blocks": 8192, 00:14:17.237 "block_size": 4096, 00:14:17.237 "physical_block_size": 4096, 00:14:17.237 "uuid": "4edee6c8-697a-4595-8750-ce9b3390bc56", 00:14:17.237 "optimal_io_boundary": 0, 00:14:17.237 "md_size": 0, 00:14:17.237 "dif_type": 0, 00:14:17.237 "dif_is_head_of_md": false, 00:14:17.237 "dif_pi_format": 0 00:14:17.237 } 00:14:17.237 }, 00:14:17.237 { 00:14:17.237 "method": "bdev_wait_for_examine" 00:14:17.237 } 00:14:17.237 ] 00:14:17.237 }, 00:14:17.237 { 00:14:17.237 "subsystem": "nbd", 00:14:17.237 "config": [] 00:14:17.237 }, 00:14:17.237 { 00:14:17.237 "subsystem": "scheduler", 
00:14:17.237 "config": [ 00:14:17.237 { 00:14:17.237 "method": "framework_set_scheduler", 00:14:17.237 "params": { 00:14:17.237 "name": "static" 00:14:17.237 } 00:14:17.237 } 00:14:17.237 ] 00:14:17.237 }, 00:14:17.237 { 00:14:17.237 "subsystem": "nvmf", 00:14:17.237 "config": [ 00:14:17.237 { 00:14:17.237 "method": "nvmf_set_config", 00:14:17.237 "params": { 00:14:17.237 "discovery_filter": "match_any", 00:14:17.237 "admin_cmd_passthru": { 00:14:17.237 "identify_ctrlr": false 00:14:17.237 } 00:14:17.237 } 00:14:17.237 }, 00:14:17.237 { 00:14:17.237 "method": "nvmf_set_max_subsystems", 00:14:17.237 "params": { 00:14:17.237 "max_subsystems": 1024 00:14:17.237 } 00:14:17.237 }, 00:14:17.237 { 00:14:17.237 "method": "nvmf_set_crdt", 00:14:17.237 "params": { 00:14:17.237 "crdt1": 0, 00:14:17.237 "crdt2": 0, 00:14:17.237 "crdt3": 0 00:14:17.237 } 00:14:17.237 }, 00:14:17.237 { 00:14:17.237 "method": "nvmf_create_transport", 00:14:17.237 "params": { 00:14:17.237 "trtype": "TCP", 00:14:17.237 "max_queue_depth": 128, 00:14:17.237 "max_io_qpairs_per_ctrlr": 127, 00:14:17.237 "in_capsule_data_size": 4096, 00:14:17.237 "max_io_size": 131072, 00:14:17.237 "io_unit_size": 131072, 00:14:17.237 "max_aq_depth": 128, 00:14:17.237 "num_shared_buffers": 511, 00:14:17.237 "buf_cache_size": 4294967295, 00:14:17.237 "dif_insert_or_strip": false, 00:14:17.237 "zcopy": false, 00:14:17.237 "c2h_success": false, 00:14:17.237 "sock_priority": 0, 00:14:17.237 "abort_timeout_sec": 1, 00:14:17.237 "ack_timeout": 0, 00:14:17.237 "data_wr_pool_size": 0 00:14:17.237 } 00:14:17.237 }, 00:14:17.237 { 00:14:17.237 "method": "nvmf_create_subsystem", 00:14:17.237 "params": { 00:14:17.237 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:17.237 "allow_any_host": false, 00:14:17.237 "serial_number": "SPDK00000000000001", 00:14:17.237 "model_number": "SPDK bdev Controller", 00:14:17.237 "max_namespaces": 10, 00:14:17.237 "min_cntlid": 1, 00:14:17.237 "max_cntlid": 65519, 00:14:17.237 "ana_reporting": false 00:14:17.237 } 00:14:17.237 }, 00:14:17.237 { 00:14:17.237 "method": "nvmf_subsystem_add_host", 00:14:17.237 "params": { 00:14:17.237 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:17.237 "host": "nqn.2016-06.io.spdk:host1", 00:14:17.237 "psk": "/tmp/tmp.MvmRCQKYCT" 00:14:17.237 } 00:14:17.237 }, 00:14:17.237 { 00:14:17.237 "method": "nvmf_subsystem_add_ns", 00:14:17.237 "params": { 00:14:17.237 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:17.237 "namespace": { 00:14:17.237 "nsid": 1, 00:14:17.237 "bdev_name": "malloc0", 00:14:17.237 "nguid": "4EDEE6C8697A45958750CE9B3390BC56", 00:14:17.237 "uuid": "4edee6c8-697a-4595-8750-ce9b3390bc56", 00:14:17.237 "no_auto_visible": false 00:14:17.237 } 00:14:17.237 } 00:14:17.237 }, 00:14:17.237 { 00:14:17.237 "method": "nvmf_subsystem_add_listener", 00:14:17.237 "params": { 00:14:17.237 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:17.237 "listen_address": { 00:14:17.237 "trtype": "TCP", 00:14:17.237 "adrfam": "IPv4", 00:14:17.237 "traddr": "10.0.0.2", 00:14:17.237 "trsvcid": "4420" 00:14:17.237 }, 00:14:17.237 "secure_channel": true 00:14:17.237 } 00:14:17.237 } 00:14:17.237 ] 00:14:17.237 } 00:14:17.237 ] 00:14:17.237 }' 00:14:17.237 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72880 00:14:17.237 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:17.237 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # 
waitforlisten 72880 00:14:17.237 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72880 ']' 00:14:17.237 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.237 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:17.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.237 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.237 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:17.237 19:52:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.237 [2024-07-24 19:52:45.810644] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:14:17.237 [2024-07-24 19:52:45.810757] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.495 [2024-07-24 19:52:45.951115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.495 [2024-07-24 19:52:46.067447] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.495 [2024-07-24 19:52:46.067505] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.495 [2024-07-24 19:52:46.067517] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.495 [2024-07-24 19:52:46.067525] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.495 [2024-07-24 19:52:46.067545] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
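From target/tls.sh@196 onward the test performs a configuration round-trip: save_config is captured from both the target and bdevperf, both processes are killed, and each is relaunched with the captured JSON fed back in over a file descriptor (/dev/fd/62 and /dev/fd/63 in the trace, consistent with bash process substitution). The notices that follow (TCP transport init, the PSK listener on 10.0.0.2:4420, and the 10-second verify run at roughly 3900 IOPS) show the saved config is enough to re-establish the TLS association. A hedged sketch of that round-trip:

    tgtconf=$(./scripts/rpc.py save_config)
    bdevperfconf=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
    killprocess "$bdevperf_pid" && killprocess "$nvmfpid"
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &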
00:14:17.495 [2024-07-24 19:52:46.067638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.753 [2024-07-24 19:52:46.235050] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:17.753 [2024-07-24 19:52:46.304318] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.753 [2024-07-24 19:52:46.320226] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:17.753 [2024-07-24 19:52:46.336255] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:17.753 [2024-07-24 19:52:46.345901] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.319 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:18.320 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:18.320 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:18.320 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:18.320 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.320 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.320 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=72918 00:14:18.320 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 72918 /var/tmp/bdevperf.sock 00:14:18.320 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 72918 ']' 00:14:18.320 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:18.320 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:18.320 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:14:18.320 "subsystems": [ 00:14:18.320 { 00:14:18.320 "subsystem": "keyring", 00:14:18.320 "config": [] 00:14:18.320 }, 00:14:18.320 { 00:14:18.320 "subsystem": "iobuf", 00:14:18.320 "config": [ 00:14:18.320 { 00:14:18.320 "method": "iobuf_set_options", 00:14:18.320 "params": { 00:14:18.320 "small_pool_count": 8192, 00:14:18.320 "large_pool_count": 1024, 00:14:18.320 "small_bufsize": 8192, 00:14:18.320 "large_bufsize": 135168 00:14:18.320 } 00:14:18.320 } 00:14:18.320 ] 00:14:18.320 }, 00:14:18.320 { 00:14:18.320 "subsystem": "sock", 00:14:18.320 "config": [ 00:14:18.320 { 00:14:18.320 "method": "sock_set_default_impl", 00:14:18.320 "params": { 00:14:18.320 "impl_name": "uring" 00:14:18.320 } 00:14:18.320 }, 00:14:18.320 { 00:14:18.320 "method": "sock_impl_set_options", 00:14:18.320 "params": { 00:14:18.320 "impl_name": "ssl", 00:14:18.320 "recv_buf_size": 4096, 00:14:18.320 "send_buf_size": 4096, 00:14:18.320 "enable_recv_pipe": true, 00:14:18.320 "enable_quickack": false, 00:14:18.320 "enable_placement_id": 0, 00:14:18.320 "enable_zerocopy_send_server": true, 00:14:18.320 "enable_zerocopy_send_client": false, 00:14:18.320 "zerocopy_threshold": 0, 00:14:18.320 "tls_version": 0, 00:14:18.320 "enable_ktls": false 00:14:18.320 } 00:14:18.320 }, 00:14:18.320 { 00:14:18.320 
"method": "sock_impl_set_options", 00:14:18.320 "params": { 00:14:18.320 "impl_name": "posix", 00:14:18.320 "recv_buf_size": 2097152, 00:14:18.320 "send_buf_size": 2097152, 00:14:18.320 "enable_recv_pipe": true, 00:14:18.320 "enable_quickack": false, 00:14:18.320 "enable_placement_id": 0, 00:14:18.320 "enable_zerocopy_send_server": true, 00:14:18.320 "enable_zerocopy_send_client": false, 00:14:18.320 "zerocopy_threshold": 0, 00:14:18.320 "tls_version": 0, 00:14:18.320 "enable_ktls": false 00:14:18.320 } 00:14:18.320 }, 00:14:18.320 { 00:14:18.320 "method": "sock_impl_set_options", 00:14:18.320 "params": { 00:14:18.320 "impl_name": "uring", 00:14:18.320 "recv_buf_size": 2097152, 00:14:18.320 "send_buf_size": 2097152, 00:14:18.320 "enable_recv_pipe": true, 00:14:18.320 "enable_quickack": false, 00:14:18.320 "enable_placement_id": 0, 00:14:18.320 "enable_zerocopy_send_server": false, 00:14:18.320 "enable_zerocopy_send_client": false, 00:14:18.320 "zerocopy_threshold": 0, 00:14:18.320 "tls_version": 0, 00:14:18.320 "enable_ktls": false 00:14:18.320 } 00:14:18.320 } 00:14:18.320 ] 00:14:18.320 }, 00:14:18.320 { 00:14:18.320 "subsystem": "vmd", 00:14:18.320 "config": [] 00:14:18.320 }, 00:14:18.320 { 00:14:18.320 "subsystem": "accel", 00:14:18.320 "config": [ 00:14:18.320 { 00:14:18.320 "method": "accel_set_options", 00:14:18.320 "params": { 00:14:18.320 "small_cache_size": 128, 00:14:18.320 "large_cache_size": 16, 00:14:18.320 "task_count": 2048, 00:14:18.320 "sequence_count": 2048, 00:14:18.320 "buf_count": 2048 00:14:18.320 } 00:14:18.320 } 00:14:18.320 ] 00:14:18.320 }, 00:14:18.320 { 00:14:18.320 "subsystem": "bdev", 00:14:18.320 "config": [ 00:14:18.320 { 00:14:18.320 "method": "bdev_set_options", 00:14:18.320 "params": { 00:14:18.320 "bdev_io_pool_size": 65535, 00:14:18.320 "bdev_io_cache_size": 256, 00:14:18.320 "bdev_auto_examine": true, 00:14:18.320 "iobuf_small_cache_size": 128, 00:14:18.320 "iobuf_large_cache_size": 16 00:14:18.320 } 00:14:18.320 }, 00:14:18.320 { 00:14:18.320 "method": "bdev_raid_set_options", 00:14:18.320 "params": { 00:14:18.320 "process_window_size_kb": 1024, 00:14:18.320 "process_max_bandwidth_mb_sec": 0 00:14:18.320 } 00:14:18.320 }, 00:14:18.320 { 00:14:18.320 "method": "bdev_iscsi_set_options", 00:14:18.320 "params": { 00:14:18.320 "timeout_sec": 30 00:14:18.320 } 00:14:18.320 }, 00:14:18.320 { 00:14:18.320 "method": "bdev_nvme_set_options", 00:14:18.320 "params": { 00:14:18.320 "action_on_timeout": "none", 00:14:18.320 "timeout_us": 0, 00:14:18.320 "timeout_admin_us": 0, 00:14:18.320 "keep_alive_timeout_ms": 10000, 00:14:18.320 "arbitration_burst": 0, 00:14:18.320 "low_priority_weight": 0, 00:14:18.320 "medium_priority_weight": 0, 00:14:18.320 "high_priority_weight": 0, 00:14:18.320 "nvme_adminq_poll_period_us": 10000, 00:14:18.320 "nvme_ioq_poll_period_us": 0, 00:14:18.320 "io_queue_requests": 512, 00:14:18.320 "delay_cmd_submit": true, 00:14:18.320 "transport_retry_count": 4, 00:14:18.320 "bdev_retry_count": 3, 00:14:18.320 "transport_ack_timeout": 0, 00:14:18.320 "ctrlr_loss_timeout_sec": 0, 00:14:18.320 "reconnect_delay_sec": 0, 00:14:18.320 "fast_io_fail_timeout_sec": 0, 00:14:18.320 "disable_aWaiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:18.320 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:18.320 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:18.320 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:18.320 19:52:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.320 uto_failback": false, 00:14:18.320 "generate_uuids": false, 00:14:18.320 "transport_tos": 0, 00:14:18.320 "nvme_error_stat": false, 00:14:18.320 "rdma_srq_size": 0, 00:14:18.320 "io_path_stat": false, 00:14:18.320 "allow_accel_sequence": false, 00:14:18.320 "rdma_max_cq_size": 0, 00:14:18.320 "rdma_cm_event_timeout_ms": 0, 00:14:18.320 "dhchap_digests": [ 00:14:18.320 "sha256", 00:14:18.320 "sha384", 00:14:18.320 "sha512" 00:14:18.320 ], 00:14:18.320 "dhchap_dhgroups": [ 00:14:18.320 "null", 00:14:18.320 "ffdhe2048", 00:14:18.320 "ffdhe3072", 00:14:18.320 "ffdhe4096", 00:14:18.320 "ffdhe6144", 00:14:18.320 "ffdhe8192" 00:14:18.320 ] 00:14:18.320 } 00:14:18.320 }, 00:14:18.320 { 00:14:18.320 "method": "bdev_nvme_attach_controller", 00:14:18.320 "params": { 00:14:18.320 "name": "TLSTEST", 00:14:18.320 "trtype": "TCP", 00:14:18.320 "adrfam": "IPv4", 00:14:18.320 "traddr": "10.0.0.2", 00:14:18.320 "trsvcid": "4420", 00:14:18.320 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:18.320 "prchk_reftag": false, 00:14:18.320 "prchk_guard": false, 00:14:18.320 "ctrlr_loss_timeout_sec": 0, 00:14:18.320 "reconnect_delay_sec": 0, 00:14:18.320 "fast_io_fail_timeout_sec": 0, 00:14:18.320 "psk": "/tmp/tmp.MvmRCQKYCT", 00:14:18.320 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:18.320 "hdgst": false, 00:14:18.320 "ddgst": false 00:14:18.320 } 00:14:18.320 }, 00:14:18.320 { 00:14:18.320 "method": "bdev_nvme_set_hotplug", 00:14:18.320 "params": { 00:14:18.320 "period_us": 100000, 00:14:18.320 "enable": false 00:14:18.320 } 00:14:18.320 }, 00:14:18.320 { 00:14:18.320 "method": "bdev_wait_for_examine" 00:14:18.320 } 00:14:18.320 ] 00:14:18.320 }, 00:14:18.320 { 00:14:18.320 "subsystem": "nbd", 00:14:18.320 "config": [] 00:14:18.320 } 00:14:18.320 ] 00:14:18.320 }' 00:14:18.320 [2024-07-24 19:52:46.935822] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:14:18.320 [2024-07-24 19:52:46.936554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72918 ] 00:14:18.578 [2024-07-24 19:52:47.086638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.578 [2024-07-24 19:52:47.199792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:18.836 [2024-07-24 19:52:47.334439] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:18.836 [2024-07-24 19:52:47.373172] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:18.836 [2024-07-24 19:52:47.373978] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:19.403 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:19.403 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:19.403 19:52:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:19.661 Running I/O for 10 seconds... 00:14:29.642 00:14:29.642 Latency(us) 00:14:29.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.642 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:29.642 Verification LBA range: start 0x0 length 0x2000 00:14:29.642 TLSTESTn1 : 10.02 3902.74 15.25 0.00 0.00 32734.49 6732.33 41228.10 00:14:29.642 =================================================================================================================== 00:14:29.642 Total : 3902.74 15.25 0.00 0.00 32734.49 6732.33 41228.10 00:14:29.642 0 00:14:29.642 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:29.642 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 72918 00:14:29.642 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72918 ']' 00:14:29.642 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72918 00:14:29.642 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:29.642 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:29.642 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72918 00:14:29.642 killing process with pid 72918 00:14:29.642 Received shutdown signal, test time was about 10.000000 seconds 00:14:29.642 00:14:29.642 Latency(us) 00:14:29.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.642 =================================================================================================================== 00:14:29.642 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:29.642 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:29.642 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:29.642 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 72918' 00:14:29.642 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72918 00:14:29.642 [2024-07-24 19:52:58.164337] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:29.642 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72918 00:14:29.901 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 72880 00:14:29.901 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 72880 ']' 00:14:29.901 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 72880 00:14:29.901 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:29.901 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:29.901 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72880 00:14:29.901 killing process with pid 72880 00:14:29.901 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:29.901 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:29.901 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72880' 00:14:29.901 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 72880 00:14:29.901 [2024-07-24 19:52:58.412480] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:29.901 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 72880 00:14:30.159 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:14:30.159 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:30.159 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:30.159 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:30.159 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73057 00:14:30.159 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:30.159 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73057 00:14:30.159 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73057 ']' 00:14:30.159 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.159 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:30.159 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
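With the previous bdevperf and target processes killed, nvmfappstart has just launched a fresh nvmf_tgt (pid 73057) and the trace below is waitforlisten polling its RPC socket at /var/tmp/spdk.sock. A minimal sketch of that wait pattern, an assumption rather than the harness's actual helper (the retry count and sleep interval are arbitrary):

  # Poll the app's RPC socket until a trivial call succeeds, then continue with setup.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for _ in $(seq 1 100); do
      "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done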
00:14:30.159 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:30.160 19:52:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:30.160 [2024-07-24 19:52:58.696948] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:14:30.160 [2024-07-24 19:52:58.697037] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.418 [2024-07-24 19:52:58.833942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.418 [2024-07-24 19:52:58.945905] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.418 [2024-07-24 19:52:58.945964] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.418 [2024-07-24 19:52:58.945977] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.418 [2024-07-24 19:52:58.945985] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.418 [2024-07-24 19:52:58.945993] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:30.418 [2024-07-24 19:52:58.946033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.418 [2024-07-24 19:52:58.998483] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:31.353 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:31.353 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:31.353 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:31.353 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:31.353 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:31.353 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.353 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.MvmRCQKYCT 00:14:31.353 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.MvmRCQKYCT 00:14:31.353 19:52:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:31.611 [2024-07-24 19:53:00.023310] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.611 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:31.870 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:32.128 [2024-07-24 19:53:00.567441] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:32.128 [2024-07-24 19:53:00.567672] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.128 19:53:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:32.387 malloc0 00:14:32.387 19:53:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:32.646 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MvmRCQKYCT 00:14:32.905 [2024-07-24 19:53:01.338726] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:32.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:32.905 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=73112 00:14:32.905 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:32.905 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:32.905 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 73112 /var/tmp/bdevperf.sock 00:14:32.905 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73112 ']' 00:14:32.905 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:32.905 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:32.905 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:32.905 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:32.905 19:53:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:32.905 [2024-07-24 19:53:01.411561] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
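For readability, here is the target-side TLS setup that setup_nvmf_tgt has just issued against the default /var/tmp/spdk.sock, collected in one place. This is a recap of the calls already visible in the trace, not an extra step; the -k flag enables TLS on the listener, and the --psk path form is the one the warning above marks as deprecated:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" nvmf_create_transport -t tcp -o
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  "$rpc" bdev_malloc_create 32 4096 -b malloc0
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.MvmRCQKYCT

The bdevperf initiator for this case (pid 73112) is then started with -z and configured over /var/tmp/bdevperf.sock once it is up.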
00:14:32.905 [2024-07-24 19:53:01.411661] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73112 ] 00:14:32.905 [2024-07-24 19:53:01.549050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.163 [2024-07-24 19:53:01.676462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.163 [2024-07-24 19:53:01.738308] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:34.097 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:34.097 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:34.097 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MvmRCQKYCT 00:14:34.097 19:53:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:34.356 [2024-07-24 19:53:02.975773] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:34.615 nvme0n1 00:14:34.615 19:53:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:34.615 Running I/O for 1 seconds... 00:14:35.547 00:14:35.547 Latency(us) 00:14:35.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.547 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:35.547 Verification LBA range: start 0x0 length 0x2000 00:14:35.547 nvme0n1 : 1.02 3890.26 15.20 0.00 0.00 32554.38 6911.07 25141.99 00:14:35.547 =================================================================================================================== 00:14:35.547 Total : 3890.26 15.20 0.00 0.00 32554.38 6911.07 25141.99 00:14:35.547 0 00:14:35.805 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 73112 00:14:35.805 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73112 ']' 00:14:35.805 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73112 00:14:35.805 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:35.805 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:35.805 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73112 00:14:35.805 killing process with pid 73112 00:14:35.805 Received shutdown signal, test time was about 1.000000 seconds 00:14:35.805 00:14:35.805 Latency(us) 00:14:35.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.805 =================================================================================================================== 00:14:35.805 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:35.805 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:35.805 
19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:35.805 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73112' 00:14:35.805 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73112 00:14:35.805 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73112 00:14:35.805 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 73057 00:14:35.805 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73057 ']' 00:14:35.805 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73057 00:14:35.805 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:35.806 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:35.806 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73057 00:14:36.064 killing process with pid 73057 00:14:36.064 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:36.064 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:36.064 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73057' 00:14:36.064 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73057 00:14:36.064 [2024-07-24 19:53:04.490055] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:36.064 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73057 00:14:36.064 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:14:36.064 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:36.064 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:36.064 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:36.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.323 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73163 00:14:36.323 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:36.323 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73163 00:14:36.323 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73163 ']' 00:14:36.323 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.323 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:36.323 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
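The case that just completed attached the controller through the keyring rather than a raw PSK path. Recapping the two initiator-side calls from the trace above (the PSK file is registered once as key0 and then referenced by name), plus the helper that drives the I/O over the same socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Register the PSK file under a key name, then attach the TLS controller by that name.
  "$rpc" -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MvmRCQKYCT
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # Kick off the verify workload against the attached namespace.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Going through the keyring avoids the deprecated spdk_nvme_ctrlr_opts.psk path used in the earlier case.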
00:14:36.323 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:36.323 19:53:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:36.323 [2024-07-24 19:53:04.784225] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:14:36.323 [2024-07-24 19:53:04.784538] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.323 [2024-07-24 19:53:04.921826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.581 [2024-07-24 19:53:05.037773] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.581 [2024-07-24 19:53:05.038051] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.581 [2024-07-24 19:53:05.038211] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.581 [2024-07-24 19:53:05.038404] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.581 [2024-07-24 19:53:05.038506] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:36.581 [2024-07-24 19:53:05.038662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.581 [2024-07-24 19:53:05.092300] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:37.146 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:37.146 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:37.146 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:37.146 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:37.146 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:37.404 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.404 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:14:37.404 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.404 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:37.404 [2024-07-24 19:53:05.822001] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.404 malloc0 00:14:37.404 [2024-07-24 19:53:05.853155] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:37.404 [2024-07-24 19:53:05.853369] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.404 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.404 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=73195 00:14:37.404 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:37.404 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 73195 
/var/tmp/bdevperf.sock 00:14:37.404 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73195 ']' 00:14:37.404 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:37.404 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:37.404 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:37.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:37.404 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:37.404 19:53:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:37.404 [2024-07-24 19:53:05.937069] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:14:37.404 [2024-07-24 19:53:05.937412] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73195 ] 00:14:37.661 [2024-07-24 19:53:06.074541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.661 [2024-07-24 19:53:06.188832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.661 [2024-07-24 19:53:06.244884] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:38.595 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:38.595 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:38.595 19:53:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MvmRCQKYCT 00:14:38.595 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:38.854 [2024-07-24 19:53:07.390642] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:38.854 nvme0n1 00:14:38.854 19:53:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:39.112 Running I/O for 1 seconds... 
00:14:40.047 00:14:40.047 Latency(us) 00:14:40.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.048 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:40.048 Verification LBA range: start 0x0 length 0x2000 00:14:40.048 nvme0n1 : 1.02 3758.35 14.68 0.00 0.00 33662.39 10128.29 23712.12 00:14:40.048 =================================================================================================================== 00:14:40.048 Total : 3758.35 14.68 0.00 0.00 33662.39 10128.29 23712.12 00:14:40.048 0 00:14:40.048 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:14:40.048 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.048 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.307 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.307 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:14:40.307 "subsystems": [ 00:14:40.307 { 00:14:40.307 "subsystem": "keyring", 00:14:40.307 "config": [ 00:14:40.307 { 00:14:40.307 "method": "keyring_file_add_key", 00:14:40.307 "params": { 00:14:40.307 "name": "key0", 00:14:40.307 "path": "/tmp/tmp.MvmRCQKYCT" 00:14:40.307 } 00:14:40.307 } 00:14:40.307 ] 00:14:40.307 }, 00:14:40.307 { 00:14:40.307 "subsystem": "iobuf", 00:14:40.307 "config": [ 00:14:40.307 { 00:14:40.307 "method": "iobuf_set_options", 00:14:40.307 "params": { 00:14:40.307 "small_pool_count": 8192, 00:14:40.307 "large_pool_count": 1024, 00:14:40.307 "small_bufsize": 8192, 00:14:40.307 "large_bufsize": 135168 00:14:40.307 } 00:14:40.307 } 00:14:40.307 ] 00:14:40.307 }, 00:14:40.307 { 00:14:40.307 "subsystem": "sock", 00:14:40.307 "config": [ 00:14:40.307 { 00:14:40.307 "method": "sock_set_default_impl", 00:14:40.307 "params": { 00:14:40.307 "impl_name": "uring" 00:14:40.307 } 00:14:40.307 }, 00:14:40.307 { 00:14:40.307 "method": "sock_impl_set_options", 00:14:40.307 "params": { 00:14:40.307 "impl_name": "ssl", 00:14:40.307 "recv_buf_size": 4096, 00:14:40.307 "send_buf_size": 4096, 00:14:40.307 "enable_recv_pipe": true, 00:14:40.307 "enable_quickack": false, 00:14:40.307 "enable_placement_id": 0, 00:14:40.307 "enable_zerocopy_send_server": true, 00:14:40.307 "enable_zerocopy_send_client": false, 00:14:40.307 "zerocopy_threshold": 0, 00:14:40.307 "tls_version": 0, 00:14:40.307 "enable_ktls": false 00:14:40.307 } 00:14:40.307 }, 00:14:40.307 { 00:14:40.307 "method": "sock_impl_set_options", 00:14:40.307 "params": { 00:14:40.307 "impl_name": "posix", 00:14:40.307 "recv_buf_size": 2097152, 00:14:40.307 "send_buf_size": 2097152, 00:14:40.307 "enable_recv_pipe": true, 00:14:40.307 "enable_quickack": false, 00:14:40.307 "enable_placement_id": 0, 00:14:40.307 "enable_zerocopy_send_server": true, 00:14:40.307 "enable_zerocopy_send_client": false, 00:14:40.307 "zerocopy_threshold": 0, 00:14:40.307 "tls_version": 0, 00:14:40.307 "enable_ktls": false 00:14:40.307 } 00:14:40.307 }, 00:14:40.307 { 00:14:40.307 "method": "sock_impl_set_options", 00:14:40.307 "params": { 00:14:40.307 "impl_name": "uring", 00:14:40.307 "recv_buf_size": 2097152, 00:14:40.307 "send_buf_size": 2097152, 00:14:40.307 "enable_recv_pipe": true, 00:14:40.307 "enable_quickack": false, 00:14:40.307 "enable_placement_id": 0, 00:14:40.307 "enable_zerocopy_send_server": false, 00:14:40.307 "enable_zerocopy_send_client": false, 00:14:40.307 
"zerocopy_threshold": 0, 00:14:40.307 "tls_version": 0, 00:14:40.307 "enable_ktls": false 00:14:40.307 } 00:14:40.307 } 00:14:40.307 ] 00:14:40.307 }, 00:14:40.307 { 00:14:40.307 "subsystem": "vmd", 00:14:40.307 "config": [] 00:14:40.307 }, 00:14:40.307 { 00:14:40.307 "subsystem": "accel", 00:14:40.307 "config": [ 00:14:40.307 { 00:14:40.307 "method": "accel_set_options", 00:14:40.307 "params": { 00:14:40.307 "small_cache_size": 128, 00:14:40.307 "large_cache_size": 16, 00:14:40.307 "task_count": 2048, 00:14:40.307 "sequence_count": 2048, 00:14:40.307 "buf_count": 2048 00:14:40.307 } 00:14:40.307 } 00:14:40.307 ] 00:14:40.307 }, 00:14:40.307 { 00:14:40.307 "subsystem": "bdev", 00:14:40.307 "config": [ 00:14:40.307 { 00:14:40.307 "method": "bdev_set_options", 00:14:40.307 "params": { 00:14:40.307 "bdev_io_pool_size": 65535, 00:14:40.307 "bdev_io_cache_size": 256, 00:14:40.307 "bdev_auto_examine": true, 00:14:40.307 "iobuf_small_cache_size": 128, 00:14:40.307 "iobuf_large_cache_size": 16 00:14:40.307 } 00:14:40.307 }, 00:14:40.307 { 00:14:40.307 "method": "bdev_raid_set_options", 00:14:40.307 "params": { 00:14:40.307 "process_window_size_kb": 1024, 00:14:40.307 "process_max_bandwidth_mb_sec": 0 00:14:40.307 } 00:14:40.307 }, 00:14:40.307 { 00:14:40.307 "method": "bdev_iscsi_set_options", 00:14:40.307 "params": { 00:14:40.307 "timeout_sec": 30 00:14:40.307 } 00:14:40.307 }, 00:14:40.307 { 00:14:40.307 "method": "bdev_nvme_set_options", 00:14:40.307 "params": { 00:14:40.307 "action_on_timeout": "none", 00:14:40.307 "timeout_us": 0, 00:14:40.307 "timeout_admin_us": 0, 00:14:40.307 "keep_alive_timeout_ms": 10000, 00:14:40.307 "arbitration_burst": 0, 00:14:40.307 "low_priority_weight": 0, 00:14:40.307 "medium_priority_weight": 0, 00:14:40.307 "high_priority_weight": 0, 00:14:40.307 "nvme_adminq_poll_period_us": 10000, 00:14:40.307 "nvme_ioq_poll_period_us": 0, 00:14:40.307 "io_queue_requests": 0, 00:14:40.307 "delay_cmd_submit": true, 00:14:40.307 "transport_retry_count": 4, 00:14:40.307 "bdev_retry_count": 3, 00:14:40.307 "transport_ack_timeout": 0, 00:14:40.307 "ctrlr_loss_timeout_sec": 0, 00:14:40.307 "reconnect_delay_sec": 0, 00:14:40.307 "fast_io_fail_timeout_sec": 0, 00:14:40.307 "disable_auto_failback": false, 00:14:40.307 "generate_uuids": false, 00:14:40.307 "transport_tos": 0, 00:14:40.307 "nvme_error_stat": false, 00:14:40.307 "rdma_srq_size": 0, 00:14:40.307 "io_path_stat": false, 00:14:40.307 "allow_accel_sequence": false, 00:14:40.307 "rdma_max_cq_size": 0, 00:14:40.307 "rdma_cm_event_timeout_ms": 0, 00:14:40.307 "dhchap_digests": [ 00:14:40.307 "sha256", 00:14:40.307 "sha384", 00:14:40.307 "sha512" 00:14:40.307 ], 00:14:40.307 "dhchap_dhgroups": [ 00:14:40.307 "null", 00:14:40.307 "ffdhe2048", 00:14:40.307 "ffdhe3072", 00:14:40.307 "ffdhe4096", 00:14:40.307 "ffdhe6144", 00:14:40.307 "ffdhe8192" 00:14:40.307 ] 00:14:40.307 } 00:14:40.307 }, 00:14:40.307 { 00:14:40.307 "method": "bdev_nvme_set_hotplug", 00:14:40.307 "params": { 00:14:40.307 "period_us": 100000, 00:14:40.307 "enable": false 00:14:40.307 } 00:14:40.307 }, 00:14:40.307 { 00:14:40.307 "method": "bdev_malloc_create", 00:14:40.307 "params": { 00:14:40.307 "name": "malloc0", 00:14:40.307 "num_blocks": 8192, 00:14:40.307 "block_size": 4096, 00:14:40.307 "physical_block_size": 4096, 00:14:40.307 "uuid": "957e74a9-c780-4a75-8701-728f0c7ae6da", 00:14:40.307 "optimal_io_boundary": 0, 00:14:40.307 "md_size": 0, 00:14:40.307 "dif_type": 0, 00:14:40.307 "dif_is_head_of_md": false, 00:14:40.307 "dif_pi_format": 0 00:14:40.307 } 
00:14:40.307 }, 00:14:40.307 { 00:14:40.307 "method": "bdev_wait_for_examine" 00:14:40.307 } 00:14:40.307 ] 00:14:40.307 }, 00:14:40.307 { 00:14:40.307 "subsystem": "nbd", 00:14:40.307 "config": [] 00:14:40.308 }, 00:14:40.308 { 00:14:40.308 "subsystem": "scheduler", 00:14:40.308 "config": [ 00:14:40.308 { 00:14:40.308 "method": "framework_set_scheduler", 00:14:40.308 "params": { 00:14:40.308 "name": "static" 00:14:40.308 } 00:14:40.308 } 00:14:40.308 ] 00:14:40.308 }, 00:14:40.308 { 00:14:40.308 "subsystem": "nvmf", 00:14:40.308 "config": [ 00:14:40.308 { 00:14:40.308 "method": "nvmf_set_config", 00:14:40.308 "params": { 00:14:40.308 "discovery_filter": "match_any", 00:14:40.308 "admin_cmd_passthru": { 00:14:40.308 "identify_ctrlr": false 00:14:40.308 } 00:14:40.308 } 00:14:40.308 }, 00:14:40.308 { 00:14:40.308 "method": "nvmf_set_max_subsystems", 00:14:40.308 "params": { 00:14:40.308 "max_subsystems": 1024 00:14:40.308 } 00:14:40.308 }, 00:14:40.308 { 00:14:40.308 "method": "nvmf_set_crdt", 00:14:40.308 "params": { 00:14:40.308 "crdt1": 0, 00:14:40.308 "crdt2": 0, 00:14:40.308 "crdt3": 0 00:14:40.308 } 00:14:40.308 }, 00:14:40.308 { 00:14:40.308 "method": "nvmf_create_transport", 00:14:40.308 "params": { 00:14:40.308 "trtype": "TCP", 00:14:40.308 "max_queue_depth": 128, 00:14:40.308 "max_io_qpairs_per_ctrlr": 127, 00:14:40.308 "in_capsule_data_size": 4096, 00:14:40.308 "max_io_size": 131072, 00:14:40.308 "io_unit_size": 131072, 00:14:40.308 "max_aq_depth": 128, 00:14:40.308 "num_shared_buffers": 511, 00:14:40.308 "buf_cache_size": 4294967295, 00:14:40.308 "dif_insert_or_strip": false, 00:14:40.308 "zcopy": false, 00:14:40.308 "c2h_success": false, 00:14:40.308 "sock_priority": 0, 00:14:40.308 "abort_timeout_sec": 1, 00:14:40.308 "ack_timeout": 0, 00:14:40.308 "data_wr_pool_size": 0 00:14:40.308 } 00:14:40.308 }, 00:14:40.308 { 00:14:40.308 "method": "nvmf_create_subsystem", 00:14:40.308 "params": { 00:14:40.308 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:40.308 "allow_any_host": false, 00:14:40.308 "serial_number": "00000000000000000000", 00:14:40.308 "model_number": "SPDK bdev Controller", 00:14:40.308 "max_namespaces": 32, 00:14:40.308 "min_cntlid": 1, 00:14:40.308 "max_cntlid": 65519, 00:14:40.308 "ana_reporting": false 00:14:40.308 } 00:14:40.308 }, 00:14:40.308 { 00:14:40.308 "method": "nvmf_subsystem_add_host", 00:14:40.308 "params": { 00:14:40.308 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:40.308 "host": "nqn.2016-06.io.spdk:host1", 00:14:40.308 "psk": "key0" 00:14:40.308 } 00:14:40.308 }, 00:14:40.308 { 00:14:40.308 "method": "nvmf_subsystem_add_ns", 00:14:40.308 "params": { 00:14:40.308 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:40.308 "namespace": { 00:14:40.308 "nsid": 1, 00:14:40.308 "bdev_name": "malloc0", 00:14:40.308 "nguid": "957E74A9C7804A758701728F0C7AE6DA", 00:14:40.308 "uuid": "957e74a9-c780-4a75-8701-728f0c7ae6da", 00:14:40.308 "no_auto_visible": false 00:14:40.308 } 00:14:40.308 } 00:14:40.308 }, 00:14:40.308 { 00:14:40.308 "method": "nvmf_subsystem_add_listener", 00:14:40.308 "params": { 00:14:40.308 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:40.308 "listen_address": { 00:14:40.308 "trtype": "TCP", 00:14:40.308 "adrfam": "IPv4", 00:14:40.308 "traddr": "10.0.0.2", 00:14:40.308 "trsvcid": "4420" 00:14:40.308 }, 00:14:40.308 "secure_channel": false, 00:14:40.308 "sock_impl": "ssl" 00:14:40.308 } 00:14:40.308 } 00:14:40.308 ] 00:14:40.308 } 00:14:40.308 ] 00:14:40.308 }' 00:14:40.308 19:53:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:40.567 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:14:40.567 "subsystems": [ 00:14:40.567 { 00:14:40.567 "subsystem": "keyring", 00:14:40.567 "config": [ 00:14:40.567 { 00:14:40.567 "method": "keyring_file_add_key", 00:14:40.567 "params": { 00:14:40.567 "name": "key0", 00:14:40.567 "path": "/tmp/tmp.MvmRCQKYCT" 00:14:40.567 } 00:14:40.567 } 00:14:40.567 ] 00:14:40.567 }, 00:14:40.567 { 00:14:40.567 "subsystem": "iobuf", 00:14:40.567 "config": [ 00:14:40.567 { 00:14:40.567 "method": "iobuf_set_options", 00:14:40.567 "params": { 00:14:40.567 "small_pool_count": 8192, 00:14:40.567 "large_pool_count": 1024, 00:14:40.567 "small_bufsize": 8192, 00:14:40.567 "large_bufsize": 135168 00:14:40.567 } 00:14:40.567 } 00:14:40.567 ] 00:14:40.567 }, 00:14:40.567 { 00:14:40.567 "subsystem": "sock", 00:14:40.567 "config": [ 00:14:40.567 { 00:14:40.567 "method": "sock_set_default_impl", 00:14:40.567 "params": { 00:14:40.567 "impl_name": "uring" 00:14:40.567 } 00:14:40.567 }, 00:14:40.567 { 00:14:40.567 "method": "sock_impl_set_options", 00:14:40.567 "params": { 00:14:40.567 "impl_name": "ssl", 00:14:40.567 "recv_buf_size": 4096, 00:14:40.567 "send_buf_size": 4096, 00:14:40.567 "enable_recv_pipe": true, 00:14:40.567 "enable_quickack": false, 00:14:40.567 "enable_placement_id": 0, 00:14:40.567 "enable_zerocopy_send_server": true, 00:14:40.567 "enable_zerocopy_send_client": false, 00:14:40.567 "zerocopy_threshold": 0, 00:14:40.567 "tls_version": 0, 00:14:40.567 "enable_ktls": false 00:14:40.567 } 00:14:40.567 }, 00:14:40.567 { 00:14:40.567 "method": "sock_impl_set_options", 00:14:40.567 "params": { 00:14:40.567 "impl_name": "posix", 00:14:40.567 "recv_buf_size": 2097152, 00:14:40.567 "send_buf_size": 2097152, 00:14:40.567 "enable_recv_pipe": true, 00:14:40.567 "enable_quickack": false, 00:14:40.567 "enable_placement_id": 0, 00:14:40.567 "enable_zerocopy_send_server": true, 00:14:40.567 "enable_zerocopy_send_client": false, 00:14:40.567 "zerocopy_threshold": 0, 00:14:40.567 "tls_version": 0, 00:14:40.567 "enable_ktls": false 00:14:40.567 } 00:14:40.567 }, 00:14:40.567 { 00:14:40.567 "method": "sock_impl_set_options", 00:14:40.567 "params": { 00:14:40.567 "impl_name": "uring", 00:14:40.567 "recv_buf_size": 2097152, 00:14:40.567 "send_buf_size": 2097152, 00:14:40.567 "enable_recv_pipe": true, 00:14:40.567 "enable_quickack": false, 00:14:40.567 "enable_placement_id": 0, 00:14:40.567 "enable_zerocopy_send_server": false, 00:14:40.567 "enable_zerocopy_send_client": false, 00:14:40.567 "zerocopy_threshold": 0, 00:14:40.567 "tls_version": 0, 00:14:40.567 "enable_ktls": false 00:14:40.567 } 00:14:40.567 } 00:14:40.567 ] 00:14:40.567 }, 00:14:40.567 { 00:14:40.567 "subsystem": "vmd", 00:14:40.567 "config": [] 00:14:40.567 }, 00:14:40.567 { 00:14:40.567 "subsystem": "accel", 00:14:40.567 "config": [ 00:14:40.567 { 00:14:40.567 "method": "accel_set_options", 00:14:40.567 "params": { 00:14:40.567 "small_cache_size": 128, 00:14:40.567 "large_cache_size": 16, 00:14:40.567 "task_count": 2048, 00:14:40.567 "sequence_count": 2048, 00:14:40.567 "buf_count": 2048 00:14:40.567 } 00:14:40.567 } 00:14:40.567 ] 00:14:40.567 }, 00:14:40.567 { 00:14:40.568 "subsystem": "bdev", 00:14:40.568 "config": [ 00:14:40.568 { 00:14:40.568 "method": "bdev_set_options", 00:14:40.568 "params": { 00:14:40.568 "bdev_io_pool_size": 65535, 00:14:40.568 "bdev_io_cache_size": 256, 00:14:40.568 "bdev_auto_examine": true, 
00:14:40.568 "iobuf_small_cache_size": 128, 00:14:40.568 "iobuf_large_cache_size": 16 00:14:40.568 } 00:14:40.568 }, 00:14:40.568 { 00:14:40.568 "method": "bdev_raid_set_options", 00:14:40.568 "params": { 00:14:40.568 "process_window_size_kb": 1024, 00:14:40.568 "process_max_bandwidth_mb_sec": 0 00:14:40.568 } 00:14:40.568 }, 00:14:40.568 { 00:14:40.568 "method": "bdev_iscsi_set_options", 00:14:40.568 "params": { 00:14:40.568 "timeout_sec": 30 00:14:40.568 } 00:14:40.568 }, 00:14:40.568 { 00:14:40.568 "method": "bdev_nvme_set_options", 00:14:40.568 "params": { 00:14:40.568 "action_on_timeout": "none", 00:14:40.568 "timeout_us": 0, 00:14:40.568 "timeout_admin_us": 0, 00:14:40.568 "keep_alive_timeout_ms": 10000, 00:14:40.568 "arbitration_burst": 0, 00:14:40.568 "low_priority_weight": 0, 00:14:40.568 "medium_priority_weight": 0, 00:14:40.568 "high_priority_weight": 0, 00:14:40.568 "nvme_adminq_poll_period_us": 10000, 00:14:40.568 "nvme_ioq_poll_period_us": 0, 00:14:40.568 "io_queue_requests": 512, 00:14:40.568 "delay_cmd_submit": true, 00:14:40.568 "transport_retry_count": 4, 00:14:40.568 "bdev_retry_count": 3, 00:14:40.568 "transport_ack_timeout": 0, 00:14:40.568 "ctrlr_loss_timeout_sec": 0, 00:14:40.568 "reconnect_delay_sec": 0, 00:14:40.568 "fast_io_fail_timeout_sec": 0, 00:14:40.568 "disable_auto_failback": false, 00:14:40.568 "generate_uuids": false, 00:14:40.568 "transport_tos": 0, 00:14:40.568 "nvme_error_stat": false, 00:14:40.568 "rdma_srq_size": 0, 00:14:40.568 "io_path_stat": false, 00:14:40.568 "allow_accel_sequence": false, 00:14:40.568 "rdma_max_cq_size": 0, 00:14:40.568 "rdma_cm_event_timeout_ms": 0, 00:14:40.568 "dhchap_digests": [ 00:14:40.568 "sha256", 00:14:40.568 "sha384", 00:14:40.568 "sha512" 00:14:40.568 ], 00:14:40.568 "dhchap_dhgroups": [ 00:14:40.568 "null", 00:14:40.568 "ffdhe2048", 00:14:40.568 "ffdhe3072", 00:14:40.568 "ffdhe4096", 00:14:40.568 "ffdhe6144", 00:14:40.568 "ffdhe8192" 00:14:40.568 ] 00:14:40.568 } 00:14:40.568 }, 00:14:40.568 { 00:14:40.568 "method": "bdev_nvme_attach_controller", 00:14:40.568 "params": { 00:14:40.568 "name": "nvme0", 00:14:40.568 "trtype": "TCP", 00:14:40.568 "adrfam": "IPv4", 00:14:40.568 "traddr": "10.0.0.2", 00:14:40.568 "trsvcid": "4420", 00:14:40.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:40.568 "prchk_reftag": false, 00:14:40.568 "prchk_guard": false, 00:14:40.568 "ctrlr_loss_timeout_sec": 0, 00:14:40.568 "reconnect_delay_sec": 0, 00:14:40.568 "fast_io_fail_timeout_sec": 0, 00:14:40.568 "psk": "key0", 00:14:40.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:40.568 "hdgst": false, 00:14:40.568 "ddgst": false 00:14:40.568 } 00:14:40.568 }, 00:14:40.568 { 00:14:40.568 "method": "bdev_nvme_set_hotplug", 00:14:40.568 "params": { 00:14:40.568 "period_us": 100000, 00:14:40.568 "enable": false 00:14:40.568 } 00:14:40.568 }, 00:14:40.568 { 00:14:40.568 "method": "bdev_enable_histogram", 00:14:40.568 "params": { 00:14:40.568 "name": "nvme0n1", 00:14:40.568 "enable": true 00:14:40.568 } 00:14:40.568 }, 00:14:40.568 { 00:14:40.568 "method": "bdev_wait_for_examine" 00:14:40.568 } 00:14:40.568 ] 00:14:40.568 }, 00:14:40.568 { 00:14:40.568 "subsystem": "nbd", 00:14:40.568 "config": [] 00:14:40.568 } 00:14:40.568 ] 00:14:40.568 }' 00:14:40.568 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 73195 00:14:40.568 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73195 ']' 00:14:40.568 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- 
# kill -0 73195 00:14:40.568 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:40.568 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:40.568 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73195 00:14:40.568 killing process with pid 73195 00:14:40.568 Received shutdown signal, test time was about 1.000000 seconds 00:14:40.568 00:14:40.568 Latency(us) 00:14:40.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.568 =================================================================================================================== 00:14:40.568 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:40.568 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:40.568 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:40.568 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73195' 00:14:40.568 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73195 00:14:40.568 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73195 00:14:40.827 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 73163 00:14:40.827 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73163 ']' 00:14:40.827 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73163 00:14:40.827 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:40.827 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:40.827 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73163 00:14:40.827 killing process with pid 73163 00:14:40.827 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:40.827 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:40.827 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73163' 00:14:40.827 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73163 00:14:40.827 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73163 00:14:41.086 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:14:41.086 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:41.086 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:14:41.086 "subsystems": [ 00:14:41.086 { 00:14:41.086 "subsystem": "keyring", 00:14:41.086 "config": [ 00:14:41.086 { 00:14:41.086 "method": "keyring_file_add_key", 00:14:41.086 "params": { 00:14:41.086 "name": "key0", 00:14:41.086 "path": "/tmp/tmp.MvmRCQKYCT" 00:14:41.086 } 00:14:41.086 } 00:14:41.086 ] 00:14:41.086 }, 00:14:41.086 { 00:14:41.086 "subsystem": "iobuf", 00:14:41.086 "config": [ 00:14:41.086 { 00:14:41.086 "method": "iobuf_set_options", 00:14:41.086 "params": { 00:14:41.086 "small_pool_count": 8192, 00:14:41.086 "large_pool_count": 
1024, 00:14:41.086 "small_bufsize": 8192, 00:14:41.086 "large_bufsize": 135168 00:14:41.086 } 00:14:41.086 } 00:14:41.086 ] 00:14:41.086 }, 00:14:41.086 { 00:14:41.086 "subsystem": "sock", 00:14:41.086 "config": [ 00:14:41.086 { 00:14:41.086 "method": "sock_set_default_impl", 00:14:41.086 "params": { 00:14:41.086 "impl_name": "uring" 00:14:41.086 } 00:14:41.086 }, 00:14:41.086 { 00:14:41.086 "method": "sock_impl_set_options", 00:14:41.086 "params": { 00:14:41.086 "impl_name": "ssl", 00:14:41.086 "recv_buf_size": 4096, 00:14:41.086 "send_buf_size": 4096, 00:14:41.086 "enable_recv_pipe": true, 00:14:41.086 "enable_quickack": false, 00:14:41.086 "enable_placement_id": 0, 00:14:41.086 "enable_zerocopy_send_server": true, 00:14:41.086 "enable_zerocopy_send_client": false, 00:14:41.086 "zerocopy_threshold": 0, 00:14:41.086 "tls_version": 0, 00:14:41.086 "enable_ktls": false 00:14:41.086 } 00:14:41.086 }, 00:14:41.086 { 00:14:41.086 "method": "sock_impl_set_options", 00:14:41.086 "params": { 00:14:41.086 "impl_name": "posix", 00:14:41.086 "recv_buf_size": 2097152, 00:14:41.086 "send_buf_size": 2097152, 00:14:41.086 "enable_recv_pipe": true, 00:14:41.086 "enable_quickack": false, 00:14:41.086 "enable_placement_id": 0, 00:14:41.086 "enable_zerocopy_send_server": true, 00:14:41.086 "enable_zerocopy_send_client": false, 00:14:41.086 "zerocopy_threshold": 0, 00:14:41.086 "tls_version": 0, 00:14:41.086 "enable_ktls": false 00:14:41.086 } 00:14:41.086 }, 00:14:41.086 { 00:14:41.086 "method": "sock_impl_set_options", 00:14:41.086 "params": { 00:14:41.086 "impl_name": "uring", 00:14:41.086 "recv_buf_size": 2097152, 00:14:41.086 "send_buf_size": 2097152, 00:14:41.086 "enable_recv_pipe": true, 00:14:41.086 "enable_quickack": false, 00:14:41.086 "enable_placement_id": 0, 00:14:41.087 "enable_zerocopy_send_server": false, 00:14:41.087 "enable_zerocopy_send_client": false, 00:14:41.087 "zerocopy_threshold": 0, 00:14:41.087 "tls_version": 0, 00:14:41.087 "enable_ktls": false 00:14:41.087 } 00:14:41.087 } 00:14:41.087 ] 00:14:41.087 }, 00:14:41.087 { 00:14:41.087 "subsystem": "vmd", 00:14:41.087 "config": [] 00:14:41.087 }, 00:14:41.087 { 00:14:41.087 "subsystem": "accel", 00:14:41.087 "config": [ 00:14:41.087 { 00:14:41.087 "method": "accel_set_options", 00:14:41.087 "params": { 00:14:41.087 "small_cache_size": 128, 00:14:41.087 "large_cache_size": 16, 00:14:41.087 "task_count": 2048, 00:14:41.087 "sequence_count": 2048, 00:14:41.087 "buf_count": 2048 00:14:41.087 } 00:14:41.087 } 00:14:41.087 ] 00:14:41.087 }, 00:14:41.087 { 00:14:41.087 "subsystem": "bdev", 00:14:41.087 "config": [ 00:14:41.087 { 00:14:41.087 "method": "bdev_set_options", 00:14:41.087 "params": { 00:14:41.087 "bdev_io_pool_size": 65535, 00:14:41.087 "bdev_io_cache_size": 256, 00:14:41.087 "bdev_auto_examine": true, 00:14:41.087 "iobuf_small_cache_size": 128, 00:14:41.087 "iobuf_large_cache_size": 16 00:14:41.087 } 00:14:41.087 }, 00:14:41.087 { 00:14:41.087 "method": "bdev_raid_set_options", 00:14:41.087 "params": { 00:14:41.087 "process_window_size_kb": 1024, 00:14:41.087 "process_max_bandwidth_mb_sec": 0 00:14:41.087 } 00:14:41.087 }, 00:14:41.087 { 00:14:41.087 "method": "bdev_iscsi_set_options", 00:14:41.087 "params": { 00:14:41.087 "timeout_sec": 30 00:14:41.087 } 00:14:41.087 }, 00:14:41.087 { 00:14:41.087 "method": "bdev_nvme_set_options", 00:14:41.087 "params": { 00:14:41.087 "action_on_timeout": "none", 00:14:41.087 "timeout_us": 0, 00:14:41.087 "timeout_admin_us": 0, 00:14:41.087 "keep_alive_timeout_ms": 10000, 00:14:41.087 
"arbitration_burst": 0, 00:14:41.087 "low_priority_weight": 0, 00:14:41.087 "medium_priority_weight": 0, 00:14:41.087 "high_priority_weight": 0, 00:14:41.087 "nvme_adminq_poll_period_us": 10000, 00:14:41.087 "nvme_ioq_poll_period_us": 0, 00:14:41.087 "io_queue_requests": 0, 00:14:41.087 "delay_cmd_submit": true, 00:14:41.087 "transport_retry_count": 4, 00:14:41.087 "bdev_retry_count": 3, 00:14:41.087 "transport_ack_timeout": 0, 00:14:41.087 "ctrlr_loss_timeout_sec": 0, 00:14:41.087 "reconnect_delay_sec": 0, 00:14:41.087 "fast_io_fail_timeout_sec": 0, 00:14:41.087 "disable_auto_failback": false, 00:14:41.087 "generate_uuids": false, 00:14:41.087 "transport_tos": 0, 00:14:41.087 "nvme_error_stat": false, 00:14:41.087 "rdma_srq_size": 0, 00:14:41.087 "io_path_stat": false, 00:14:41.087 "allow_accel_sequence": false, 00:14:41.087 "rdma_max_cq_size": 0, 00:14:41.087 "rdma_cm_event_timeout_ms": 0, 00:14:41.087 "dhchap_digests": [ 00:14:41.087 "sha256", 00:14:41.087 "sha384", 00:14:41.087 "sha512" 00:14:41.087 ], 00:14:41.087 "dhchap_dhgroups": [ 00:14:41.087 "null", 00:14:41.087 "ffdhe2048", 00:14:41.087 "ffdhe3072", 00:14:41.087 "ffdhe4096", 00:14:41.087 "ffdhe6144", 00:14:41.087 "ffdhe8192" 00:14:41.087 ] 00:14:41.087 } 00:14:41.087 }, 00:14:41.087 { 00:14:41.087 "method": "bdev_nvme_set_hotplug", 00:14:41.087 "params": { 00:14:41.087 "period_us": 100000, 00:14:41.087 "enable": false 00:14:41.087 } 00:14:41.087 }, 00:14:41.087 { 00:14:41.087 "method": "bdev_malloc_create", 00:14:41.087 "params": { 00:14:41.087 "name": "malloc0", 00:14:41.087 "num_blocks": 8192, 00:14:41.087 "block_size": 4096, 00:14:41.087 "physical_block_size": 4096, 00:14:41.087 "uuid": "957e74a9-c780-4a75-8701-728f0c7ae6da", 00:14:41.087 "optimal_io_boundary": 0, 00:14:41.087 "md_size": 0, 00:14:41.087 "dif_type": 0, 00:14:41.087 "dif_is_head_of_md": false, 00:14:41.087 "dif_pi_format": 0 00:14:41.087 } 00:14:41.087 }, 00:14:41.087 { 00:14:41.087 "method": "bdev_wait_for_examine" 00:14:41.087 } 00:14:41.087 ] 00:14:41.087 }, 00:14:41.087 { 00:14:41.087 "subsystem": "nbd", 00:14:41.087 "config": [] 00:14:41.087 }, 00:14:41.087 { 00:14:41.087 "subsystem": "scheduler", 00:14:41.087 "config": [ 00:14:41.087 { 00:14:41.087 "method": "framework_set_scheduler", 00:14:41.087 "params": { 00:14:41.087 "name": "static" 00:14:41.087 } 00:14:41.087 } 00:14:41.087 ] 00:14:41.087 }, 00:14:41.087 { 00:14:41.087 "subsystem": "nvmf", 00:14:41.087 "config": [ 00:14:41.087 { 00:14:41.087 "method": "nvmf_set_config", 00:14:41.087 "params": { 00:14:41.087 "discovery_filter": "match_any", 00:14:41.087 "admin_cmd_passthru": { 00:14:41.087 "identify_ctrlr": false 00:14:41.087 } 00:14:41.087 } 00:14:41.087 }, 00:14:41.087 { 00:14:41.087 "method": "nvmf_set_max_subsystems", 00:14:41.087 "params": { 00:14:41.087 "max_subsystems": 1024 00:14:41.087 } 00:14:41.087 }, 00:14:41.087 { 00:14:41.087 "method": "nvmf_set_crdt", 00:14:41.087 "params": { 00:14:41.087 "crdt1": 0, 00:14:41.087 "crdt2": 0, 00:14:41.087 "crdt3": 0 00:14:41.087 } 00:14:41.087 }, 00:14:41.087 { 00:14:41.087 "method": "nvmf_create_transport", 00:14:41.087 "params": { 00:14:41.087 "trtype": "TCP", 00:14:41.087 "max_queue_depth": 128, 00:14:41.087 "max_io_qpairs_per_ctrlr": 127, 00:14:41.087 "in_capsule_data_size": 4096, 00:14:41.087 "max_io_size": 131072, 00:14:41.087 "io_unit_size": 131072, 00:14:41.087 "max_aq_depth": 128, 00:14:41.087 "num_shared_buffers": 511, 00:14:41.087 "buf_cache_size": 4294967295, 00:14:41.087 "dif_insert_or_strip": false, 00:14:41.087 "zcopy": false, 
00:14:41.087 "c2h_success": false, 00:14:41.087 "sock_priority": 0, 00:14:41.087 "abort_timeout_sec": 1, 00:14:41.087 "ack_timeout": 0, 00:14:41.087 "data_wr_pool_size": 0 00:14:41.087 } 00:14:41.087 }, 00:14:41.087 { 00:14:41.087 "method": "nvmf_create_subsystem", 00:14:41.087 "params": { 00:14:41.087 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.087 "allow_any_host": false, 00:14:41.087 "serial_number": "00000000000000000000", 00:14:41.087 "model_number": "SPDK bdev Controller", 00:14:41.087 "max_namespaces": 32, 00:14:41.087 "min_cntlid": 1, 00:14:41.087 "max_cntlid": 65519, 00:14:41.087 "ana_reporting": false 00:14:41.087 } 00:14:41.087 }, 00:14:41.087 { 00:14:41.087 "method": "nvmf_subsystem_add_host", 00:14:41.087 "params": { 00:14:41.087 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.087 "host": "nqn.2016-06.io.spdk:host1", 00:14:41.087 "psk": "key0" 00:14:41.087 } 00:14:41.087 }, 00:14:41.087 { 00:14:41.087 "method": "nvmf_subsystem_add_ns", 00:14:41.087 "params": { 00:14:41.087 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.087 "namespace": { 00:14:41.087 "nsid": 1, 00:14:41.087 "bdev_name": "malloc0", 00:14:41.087 "nguid": "957E74A9C7804A758701728F0C7AE6DA", 00:14:41.087 "uuid": "957e74a9-c780-4a75-8701-728f0c7ae6da", 00:14:41.087 "no_auto_visible": false 00:14:41.087 } 00:14:41.087 } 00:14:41.087 }, 00:14:41.087 { 00:14:41.087 "method": "nvmf_subsystem_add_listener", 00:14:41.087 "params": { 00:14:41.087 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.087 "listen_address": { 00:14:41.087 "trtype": "TCP", 00:14:41.087 "adrfam": "IPv4", 00:14:41.087 "traddr": "10.0.0.2", 00:14:41.087 "trsvcid": "4420" 00:14:41.087 }, 00:14:41.087 "secure_channel": false, 00:14:41.087 "sock_impl": "ssl" 00:14:41.087 } 00:14:41.087 } 00:14:41.087 ] 00:14:41.087 } 00:14:41.087 ] 00:14:41.087 }' 00:14:41.087 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:41.087 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:41.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.087 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73256 00:14:41.087 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:41.087 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73256 00:14:41.087 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73256 ']' 00:14:41.087 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.087 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:41.087 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.088 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:41.088 19:53:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:41.088 [2024-07-24 19:53:09.693770] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:14:41.088 [2024-07-24 19:53:09.693901] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.346 [2024-07-24 19:53:09.837679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.346 [2024-07-24 19:53:09.950147] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.346 [2024-07-24 19:53:09.950209] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:41.346 [2024-07-24 19:53:09.950222] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.346 [2024-07-24 19:53:09.950230] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.346 [2024-07-24 19:53:09.950238] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:41.346 [2024-07-24 19:53:09.950323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.604 [2024-07-24 19:53:10.119010] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:41.604 [2024-07-24 19:53:10.198196] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.604 [2024-07-24 19:53:10.230155] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:41.604 [2024-07-24 19:53:10.238919] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.171 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:42.171 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:42.171 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:42.171 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:42.171 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:42.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:42.171 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:42.171 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=73288 00:14:42.171 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 73288 /var/tmp/bdevperf.sock 00:14:42.171 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 73288 ']' 00:14:42.171 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:42.171 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:42.171 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
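Note how both applications receive their configuration: neither writes a config file to disk. The target above was started with -c /dev/fd/62 and the bdevperf initiator below with -c /dev/fd/63, which is what bash process substitution produces when the echoed JSON is handed straight to the -c option. The general pattern looks roughly like this, where app.json and $BDEVPERF_JSON are illustrative stand-ins for the JSON blobs echoed in this log (the actual run additionally wraps the target in ip netns exec nvmf_tgt_ns_spdk):
# config delivered via process substitution, no temp files on disk
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(cat app.json) &
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c <(echo "$BDEVPERF_JSON") &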
00:14:42.171 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:42.171 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:42.171 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:42.171 19:53:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:14:42.171 "subsystems": [ 00:14:42.171 { 00:14:42.171 "subsystem": "keyring", 00:14:42.171 "config": [ 00:14:42.171 { 00:14:42.171 "method": "keyring_file_add_key", 00:14:42.171 "params": { 00:14:42.171 "name": "key0", 00:14:42.171 "path": "/tmp/tmp.MvmRCQKYCT" 00:14:42.171 } 00:14:42.171 } 00:14:42.171 ] 00:14:42.171 }, 00:14:42.171 { 00:14:42.171 "subsystem": "iobuf", 00:14:42.171 "config": [ 00:14:42.171 { 00:14:42.171 "method": "iobuf_set_options", 00:14:42.171 "params": { 00:14:42.171 "small_pool_count": 8192, 00:14:42.171 "large_pool_count": 1024, 00:14:42.171 "small_bufsize": 8192, 00:14:42.171 "large_bufsize": 135168 00:14:42.171 } 00:14:42.171 } 00:14:42.171 ] 00:14:42.171 }, 00:14:42.171 { 00:14:42.171 "subsystem": "sock", 00:14:42.171 "config": [ 00:14:42.171 { 00:14:42.171 "method": "sock_set_default_impl", 00:14:42.171 "params": { 00:14:42.171 "impl_name": "uring" 00:14:42.171 } 00:14:42.171 }, 00:14:42.171 { 00:14:42.171 "method": "sock_impl_set_options", 00:14:42.171 "params": { 00:14:42.171 "impl_name": "ssl", 00:14:42.172 "recv_buf_size": 4096, 00:14:42.172 "send_buf_size": 4096, 00:14:42.172 "enable_recv_pipe": true, 00:14:42.172 "enable_quickack": false, 00:14:42.172 "enable_placement_id": 0, 00:14:42.172 "enable_zerocopy_send_server": true, 00:14:42.172 "enable_zerocopy_send_client": false, 00:14:42.172 "zerocopy_threshold": 0, 00:14:42.172 "tls_version": 0, 00:14:42.172 "enable_ktls": false 00:14:42.172 } 00:14:42.172 }, 00:14:42.172 { 00:14:42.172 "method": "sock_impl_set_options", 00:14:42.172 "params": { 00:14:42.172 "impl_name": "posix", 00:14:42.172 "recv_buf_size": 2097152, 00:14:42.172 "send_buf_size": 2097152, 00:14:42.172 "enable_recv_pipe": true, 00:14:42.172 "enable_quickack": false, 00:14:42.172 "enable_placement_id": 0, 00:14:42.172 "enable_zerocopy_send_server": true, 00:14:42.172 "enable_zerocopy_send_client": false, 00:14:42.172 "zerocopy_threshold": 0, 00:14:42.172 "tls_version": 0, 00:14:42.172 "enable_ktls": false 00:14:42.172 } 00:14:42.172 }, 00:14:42.172 { 00:14:42.172 "method": "sock_impl_set_options", 00:14:42.172 "params": { 00:14:42.172 "impl_name": "uring", 00:14:42.172 "recv_buf_size": 2097152, 00:14:42.172 "send_buf_size": 2097152, 00:14:42.172 "enable_recv_pipe": true, 00:14:42.172 "enable_quickack": false, 00:14:42.172 "enable_placement_id": 0, 00:14:42.172 "enable_zerocopy_send_server": false, 00:14:42.172 "enable_zerocopy_send_client": false, 00:14:42.172 "zerocopy_threshold": 0, 00:14:42.172 "tls_version": 0, 00:14:42.172 "enable_ktls": false 00:14:42.172 } 00:14:42.172 } 00:14:42.172 ] 00:14:42.172 }, 00:14:42.172 { 00:14:42.172 "subsystem": "vmd", 00:14:42.172 "config": [] 00:14:42.172 }, 00:14:42.172 { 00:14:42.172 "subsystem": "accel", 00:14:42.172 "config": [ 00:14:42.172 { 00:14:42.172 "method": "accel_set_options", 00:14:42.172 "params": { 00:14:42.172 "small_cache_size": 128, 00:14:42.172 "large_cache_size": 16, 00:14:42.172 "task_count": 2048, 00:14:42.172 "sequence_count": 2048, 00:14:42.172 "buf_count": 2048 
00:14:42.172 } 00:14:42.172 } 00:14:42.172 ] 00:14:42.172 }, 00:14:42.172 { 00:14:42.172 "subsystem": "bdev", 00:14:42.172 "config": [ 00:14:42.172 { 00:14:42.172 "method": "bdev_set_options", 00:14:42.172 "params": { 00:14:42.172 "bdev_io_pool_size": 65535, 00:14:42.172 "bdev_io_cache_size": 256, 00:14:42.172 "bdev_auto_examine": true, 00:14:42.172 "iobuf_small_cache_size": 128, 00:14:42.172 "iobuf_large_cache_size": 16 00:14:42.172 } 00:14:42.172 }, 00:14:42.172 { 00:14:42.172 "method": "bdev_raid_set_options", 00:14:42.172 "params": { 00:14:42.172 "process_window_size_kb": 1024, 00:14:42.172 "process_max_bandwidth_mb_sec": 0 00:14:42.172 } 00:14:42.172 }, 00:14:42.172 { 00:14:42.172 "method": "bdev_iscsi_set_options", 00:14:42.172 "params": { 00:14:42.172 "timeout_sec": 30 00:14:42.172 } 00:14:42.172 }, 00:14:42.172 { 00:14:42.172 "method": "bdev_nvme_set_options", 00:14:42.172 "params": { 00:14:42.172 "action_on_timeout": "none", 00:14:42.172 "timeout_us": 0, 00:14:42.172 "timeout_admin_us": 0, 00:14:42.172 "keep_alive_timeout_ms": 10000, 00:14:42.172 "arbitration_burst": 0, 00:14:42.172 "low_priority_weight": 0, 00:14:42.172 "medium_priority_weight": 0, 00:14:42.172 "high_priority_weight": 0, 00:14:42.172 "nvme_adminq_poll_period_us": 10000, 00:14:42.172 "nvme_ioq_poll_period_us": 0, 00:14:42.172 "io_queue_requests": 512, 00:14:42.172 "delay_cmd_submit": true, 00:14:42.172 "transport_retry_count": 4, 00:14:42.172 "bdev_retry_count": 3, 00:14:42.172 "transport_ack_timeout": 0, 00:14:42.172 "ctrlr_loss_timeout_sec": 0, 00:14:42.172 "reconnect_delay_sec": 0, 00:14:42.172 "fast_io_fail_timeout_sec": 0, 00:14:42.172 "disable_auto_failback": false, 00:14:42.172 "generate_uuids": false, 00:14:42.172 "transport_tos": 0, 00:14:42.172 "nvme_error_stat": false, 00:14:42.172 "rdma_srq_size": 0, 00:14:42.172 "io_path_stat": false, 00:14:42.172 "allow_accel_sequence": false, 00:14:42.172 "rdma_max_cq_size": 0, 00:14:42.172 "rdma_cm_event_timeout_ms": 0, 00:14:42.172 "dhchap_digests": [ 00:14:42.172 "sha256", 00:14:42.172 "sha384", 00:14:42.172 "sha512" 00:14:42.172 ], 00:14:42.172 "dhchap_dhgroups": [ 00:14:42.172 "null", 00:14:42.172 "ffdhe2048", 00:14:42.172 "ffdhe3072", 00:14:42.172 "ffdhe4096", 00:14:42.172 "ffdhe6144", 00:14:42.172 "ffdhe8192" 00:14:42.172 ] 00:14:42.172 } 00:14:42.172 }, 00:14:42.172 { 00:14:42.172 "method": "bdev_nvme_attach_controller", 00:14:42.172 "params": { 00:14:42.172 "name": "nvme0", 00:14:42.172 "trtype": "TCP", 00:14:42.172 "adrfam": "IPv4", 00:14:42.172 "traddr": "10.0.0.2", 00:14:42.172 "trsvcid": "4420", 00:14:42.172 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:42.172 "prchk_reftag": false, 00:14:42.172 "prchk_guard": false, 00:14:42.172 "ctrlr_loss_timeout_sec": 0, 00:14:42.172 "reconnect_delay_sec": 0, 00:14:42.172 "fast_io_fail_timeout_sec": 0, 00:14:42.172 "psk": "key0", 00:14:42.172 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:42.172 "hdgst": false, 00:14:42.172 "ddgst": false 00:14:42.172 } 00:14:42.172 }, 00:14:42.172 { 00:14:42.172 "method": "bdev_nvme_set_hotplug", 00:14:42.172 "params": { 00:14:42.172 "period_us": 100000, 00:14:42.172 "enable": false 00:14:42.172 } 00:14:42.172 }, 00:14:42.172 { 00:14:42.172 "method": "bdev_enable_histogram", 00:14:42.172 "params": { 00:14:42.172 "name": "nvme0n1", 00:14:42.172 "enable": true 00:14:42.172 } 00:14:42.172 }, 00:14:42.172 { 00:14:42.172 "method": "bdev_wait_for_examine" 00:14:42.172 } 00:14:42.172 ] 00:14:42.172 }, 00:14:42.172 { 00:14:42.172 "subsystem": "nbd", 00:14:42.172 "config": [] 00:14:42.172 } 
00:14:42.172 ] 00:14:42.172 }' 00:14:42.172 [2024-07-24 19:53:10.726322] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:14:42.172 [2024-07-24 19:53:10.726456] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73288 ] 00:14:42.430 [2024-07-24 19:53:10.878430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.430 [2024-07-24 19:53:10.990482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.689 [2024-07-24 19:53:11.129507] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:42.689 [2024-07-24 19:53:11.180632] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:43.256 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:43.256 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:43.256 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:43.256 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:14:43.256 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.256 19:53:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:43.519 Running I/O for 1 seconds... 
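Two commands drive this phase and are worth pulling out of the xtrace noise: the test first confirms that the TLS-secured controller actually came up inside bdevperf, then triggers the I/O run over bdevperf's RPC socket (bdevperf was started with -z, so it idles until this call). Paths and names below are the ones used in this run.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
name=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || exit 1        # controller created by the bdev_nvme_attach_controller entry in the config above
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests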
00:14:44.455 00:14:44.455 Latency(us) 00:14:44.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.455 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:44.455 Verification LBA range: start 0x0 length 0x2000 00:14:44.455 nvme0n1 : 1.03 3482.41 13.60 0.00 0.00 36234.71 7804.74 34555.35 00:14:44.455 =================================================================================================================== 00:14:44.455 Total : 3482.41 13.60 0.00 0.00 36234.71 7804.74 34555.35 00:14:44.455 0 00:14:44.455 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:14:44.455 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:14:44.455 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:44.455 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:14:44.455 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:14:44.455 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:14:44.455 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:44.455 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:14:44.455 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:14:44.455 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:14:44.455 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:44.455 nvmf_trace.0 00:14:44.714 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:14:44.714 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 73288 00:14:44.714 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73288 ']' 00:14:44.714 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73288 00:14:44.714 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:44.714 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:44.714 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73288 00:14:44.714 killing process with pid 73288 00:14:44.714 Received shutdown signal, test time was about 1.000000 seconds 00:14:44.714 00:14:44.714 Latency(us) 00:14:44.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.714 =================================================================================================================== 00:14:44.714 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:44.714 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:44.714 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:44.714 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73288' 00:14:44.714 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 73288 00:14:44.714 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73288 00:14:44.972 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:44.972 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:44.972 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:14:44.972 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:44.972 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:14:44.972 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:44.972 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:44.972 rmmod nvme_tcp 00:14:44.972 rmmod nvme_fabrics 00:14:44.972 rmmod nvme_keyring 00:14:44.972 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:44.972 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:14:44.972 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:14:44.972 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 73256 ']' 00:14:44.972 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 73256 00:14:44.972 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 73256 ']' 00:14:44.972 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 73256 00:14:44.972 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:44.972 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:44.972 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73256 00:14:44.972 killing process with pid 73256 00:14:44.972 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:44.972 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:44.972 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73256' 00:14:44.972 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 73256 00:14:44.972 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 73256 00:14:45.230 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:45.230 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:45.230 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:45.230 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:45.230 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:45.230 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.230 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:45.230 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
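Condensed, the cleanup running here does three things: archive the tracepoint shared-memory file for later spdk_trace analysis, stop the bdevperf and nvmf_tgt processes, and unload the kernel NVMe-oF initiator modules. A sketch with illustrative shell variables in place of the concrete PIDs and output directory:
tar -C /dev/shm -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0   # trace file is named after shm id 0
kill "$bdevperf_pid" && wait "$bdevperf_pid"
kill "$nvmfpid"
modprobe -v -r nvme-tcp        # verbose removal also drops nvme_fabrics / nvme_keyring as dependencies
modprobe -v -r nvme-fabrics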
00:14:45.230 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:45.230 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.h74Z6jgWdr /tmp/tmp.sD37nB2xpa /tmp/tmp.MvmRCQKYCT 00:14:45.230 ************************************ 00:14:45.230 END TEST nvmf_tls 00:14:45.230 ************************************ 00:14:45.230 00:14:45.230 real 1m26.353s 00:14:45.230 user 2m16.924s 00:14:45.230 sys 0m27.786s 00:14:45.230 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:45.230 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:45.230 19:53:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:45.230 19:53:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:45.230 19:53:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:45.230 19:53:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:45.230 ************************************ 00:14:45.230 START TEST nvmf_fips 00:14:45.230 ************************************ 00:14:45.230 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:45.230 * Looking for test storage... 00:14:45.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:45.489 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:45.489 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:45.489 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:45.489 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.489 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.489 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.489 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.489 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.489 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.489 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.489 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.489 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.489 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:14:45.489 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:14:45.489 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.489 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.489 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 
-- # NET_TYPE=virt 00:14:45.489 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:45.489 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:45.489 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.489 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.489 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.489 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 
-- # : 0 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:14:45.490 19:53:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:14:45.490 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:14:45.490 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:14:45.490 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:45.490 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:14:45.490 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:14:45.490 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:14:45.490 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:45.490 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:14:45.490 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.490 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:14:45.490 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:14:45.491 Error setting digest 00:14:45.491 009263419B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:14:45.491 009263419B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:45.491 
19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:45.491 Cannot find device "nvmf_tgt_br" 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # true 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:45.491 Cannot find device "nvmf_tgt_br2" 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # true 00:14:45.491 19:53:14 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:45.491 Cannot find device "nvmf_tgt_br" 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # true 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:45.491 Cannot find device "nvmf_tgt_br2" 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # true 00:14:45.491 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:45.749 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:45.749 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:45.749 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.749 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:45.749 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:45.749 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.749 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:45.749 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:45.749 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:45.749 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:45.749 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:45.749 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:45.749 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:45.749 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:45.749 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:45.749 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:45.749 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:45.749 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:45.749 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:45.749 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:45.749 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:45.749 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:45.749 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:45.750 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:45.750 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:45.750 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:45.750 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:45.750 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:45.750 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:45.750 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:45.750 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:45.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:45.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:14:45.750 00:14:45.750 --- 10.0.0.2 ping statistics --- 00:14:45.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.750 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:45.750 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:45.750 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:45.750 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:14:45.750 00:14:45.750 --- 10.0.0.3 ping statistics --- 00:14:45.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.750 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:14:45.750 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:46.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:46.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:46.009 00:14:46.009 --- 10.0.0.1 ping statistics --- 00:14:46.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.009 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:46.009 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:46.009 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:14:46.009 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:46.009 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:46.009 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:46.009 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:46.009 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:46.009 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:46.009 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:46.009 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:14:46.009 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:46.009 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:46.009 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:46.009 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=73561 00:14:46.009 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:46.009 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 73561 00:14:46.009 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73561 ']' 00:14:46.009 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.009 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:46.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.009 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.009 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:46.009 19:53:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:46.009 [2024-07-24 19:53:14.537504] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
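The nvmf_veth_init sequence above builds the topology the FIPS run needs: the initiator stays in the root namespace on 10.0.0.1 while the target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 (with a second target interface on 10.0.0.3), the veth peers are joined by the nvmf_br bridge, and port 4420 is opened in iptables. Stripped of the error handling and the second target interface, the steps reduce to roughly:
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2        # root namespace -> target reachability, as verified above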
00:14:46.009 [2024-07-24 19:53:14.537606] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.009 [2024-07-24 19:53:14.676537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.268 [2024-07-24 19:53:14.791091] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.268 [2024-07-24 19:53:14.791162] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.268 [2024-07-24 19:53:14.791189] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.268 [2024-07-24 19:53:14.791198] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.268 [2024-07-24 19:53:14.791205] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:46.268 [2024-07-24 19:53:14.791234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.268 [2024-07-24 19:53:14.849038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:46.832 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:46.832 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:14:46.832 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:46.832 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:46.832 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:47.090 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.090 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:14:47.090 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:47.090 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:47.090 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:47.090 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:47.090 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:47.090 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:47.090 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:47.348 [2024-07-24 19:53:15.804075] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.348 [2024-07-24 19:53:15.820204] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:47.348 [2024-07-24 19:53:15.820419] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.348 [2024-07-24 19:53:15.852488] 
tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:47.348 malloc0 00:14:47.348 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:47.348 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=73600 00:14:47.348 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:47.348 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 73600 /var/tmp/bdevperf.sock 00:14:47.348 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 73600 ']' 00:14:47.348 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:47.348 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:47.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:47.348 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:47.348 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:47.348 19:53:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:47.348 [2024-07-24 19:53:15.962556] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:14:47.348 [2024-07-24 19:53:15.962655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73600 ] 00:14:47.607 [2024-07-24 19:53:16.102095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.607 [2024-07-24 19:53:16.209512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:47.607 [2024-07-24 19:53:16.262062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:48.542 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:48.542 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:14:48.543 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:48.801 [2024-07-24 19:53:17.283253] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:48.801 [2024-07-24 19:53:17.283424] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:48.801 TLSTESTn1 00:14:48.801 19:53:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:49.059 Running I/O for 10 seconds... 
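The attach that produced TLSTESTn1 is the core of the FIPS test: a TLS PSK in the NVMe/TCP interchange format is written to a 0600 file and passed to bdev_nvme_attach_controller via --psk, the idea being that the resulting TLS handshake exercises the FIPS provider configured through OPENSSL_CONF=spdk_fips.conf earlier. Spelled out with the values from this run:
key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
chmod 0600 "$key_path"
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk "$key_path"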
00:14:59.108 00:14:59.108 Latency(us) 00:14:59.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.108 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:59.108 Verification LBA range: start 0x0 length 0x2000 00:14:59.108 TLSTESTn1 : 10.02 3884.03 15.17 0.00 0.00 32889.73 7864.32 32648.84 00:14:59.108 =================================================================================================================== 00:14:59.108 Total : 3884.03 15.17 0.00 0.00 32889.73 7864.32 32648.84 00:14:59.108 0 00:14:59.108 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:59.108 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:59.108 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:14:59.108 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:14:59.108 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:14:59.108 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:59.108 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:14:59.108 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:14:59.108 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:14:59.108 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:59.108 nvmf_trace.0 00:14:59.108 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:14:59.108 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73600 00:14:59.108 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73600 ']' 00:14:59.108 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 73600 00:14:59.108 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:14:59.108 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:59.108 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73600 00:14:59.108 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:59.108 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:59.108 killing process with pid 73600 00:14:59.108 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73600' 00:14:59.108 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73600 00:14:59.108 Received shutdown signal, test time was about 10.000000 seconds 00:14:59.108 00:14:59.108 Latency(us) 00:14:59.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.108 =================================================================================================================== 00:14:59.108 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:59.108 [2024-07-24 19:53:27.636658] 
app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:59.108 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73600 00:14:59.366 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:59.366 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:59.366 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:14:59.366 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:59.366 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:14:59.366 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:59.366 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:59.366 rmmod nvme_tcp 00:14:59.366 rmmod nvme_fabrics 00:14:59.366 rmmod nvme_keyring 00:14:59.366 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:59.366 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:14:59.366 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:14:59.366 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 73561 ']' 00:14:59.366 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 73561 00:14:59.366 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 73561 ']' 00:14:59.366 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 73561 00:14:59.366 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:14:59.366 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:59.366 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73561 00:14:59.366 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:59.366 killing process with pid 73561 00:14:59.366 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:59.366 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73561' 00:14:59.366 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 73561 00:14:59.366 [2024-07-24 19:53:27.983081] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:59.366 19:53:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 73561 00:14:59.624 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:59.624 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:59.624 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:59.624 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:59.624 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:59.624 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.624 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:59.624 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.624 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:59.624 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:59.624 00:14:59.624 real 0m14.432s 00:14:59.624 user 0m19.712s 00:14:59.624 sys 0m5.831s 00:14:59.624 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:59.624 19:53:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:59.624 ************************************ 00:14:59.624 END TEST nvmf_fips 00:14:59.624 ************************************ 00:14:59.883 19:53:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:14:59.883 19:53:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ virt == phy ]] 00:14:59.883 19:53:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:14:59.883 00:14:59.883 real 4m35.525s 00:14:59.883 user 9m36.715s 00:14:59.883 sys 1m1.610s 00:14:59.883 19:53:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:59.883 19:53:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:59.883 ************************************ 00:14:59.883 END TEST nvmf_target_extra 00:14:59.883 ************************************ 00:14:59.883 19:53:28 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:59.883 19:53:28 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:59.883 19:53:28 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:59.883 19:53:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:59.883 ************************************ 00:14:59.883 START TEST nvmf_host 00:14:59.883 ************************************ 00:14:59.883 19:53:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:59.883 * Looking for test storage... 
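The fips.sh clean-up that produced the rmmod and "killing process" lines above reduces to the following; the PIDs are this run's bdevperf and nvmf_tgt, and whether _remove_spdk_ns also deletes the nvmf_tgt_ns_spdk namespace is an assumption (its output is redirected to /dev/null in the trace):

kill 73600 && wait 73600            # bdevperf (reactor_2)
modprobe -v -r nvme-tcp             # rmmod shows nvme_tcp, nvme_fabrics, nvme_keyring going away
modprobe -v -r nvme-fabrics
kill 73561 && wait 73561            # nvmf_tgt (reactor_1)
ip netns delete nvmf_tgt_ns_spdk    # assumed effect of _remove_spdk_ns
ip -4 addr flush nvmf_init_if
rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt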
00:14:59.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:59.883 19:53:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:59.883 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:14:59.883 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:59.883 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:59.884 ************************************ 00:14:59.884 START TEST nvmf_identify 00:14:59.884 ************************************ 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:59.884 * Looking for test storage... 
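common.sh, sourced by nvmf_host.sh above and again by identify.sh below, generates a throwaway host identity for each run. Roughly the following, where the exact parameter expansion used to derive NVME_HOSTID is an assumption and the generated values are the ones visible in the trace:

NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de in this run
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # 69cdc0e8-4c23-4318-834b-1d87efff05de
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVME_CONNECT='nvme connect'            # tests that use the kernel initiator expand this as: nvme connect "${NVME_HOST[@]}" ...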
00:14:59.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.884 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:00.143 Cannot find device "nvmf_tgt_br" 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # true 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:00.143 Cannot find device "nvmf_tgt_br2" 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # true 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:00.143 Cannot find device "nvmf_tgt_br" 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@158 -- # true 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:00.143 Cannot find device "nvmf_tgt_br2" 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # true 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:00.143 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:00.143 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:00.143 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:00.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:00.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:15:00.401 00:15:00.401 --- 10.0.0.2 ping statistics --- 00:15:00.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.401 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:00.401 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:00.401 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:15:00.401 00:15:00.401 --- 10.0.0.3 ping statistics --- 00:15:00.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.401 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:00.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:00.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:00.401 00:15:00.401 --- 10.0.0.1 ping statistics --- 00:15:00.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.401 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=73975 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM 
EXIT 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 73975 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 73975 ']' 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:00.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:00.401 19:53:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:00.401 [2024-07-24 19:53:28.961288] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:15:00.401 [2024-07-24 19:53:28.961370] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.671 [2024-07-24 19:53:29.102995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:00.671 [2024-07-24 19:53:29.223588] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.671 [2024-07-24 19:53:29.223664] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.671 [2024-07-24 19:53:29.223675] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.671 [2024-07-24 19:53:29.223683] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.671 [2024-07-24 19:53:29.223691] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
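The nvmf_veth_init topology that the three pings above verify, collected from the ip/iptables calls in the trace: the host side keeps 10.0.0.1 on nvmf_init_if, the target runs inside the nvmf_tgt_ns_spdk namespace (launched as ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and listens on 10.0.0.2, and everything is bridged through nvmf_br:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1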
00:15:00.671 [2024-07-24 19:53:29.223854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.671 [2024-07-24 19:53:29.224631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:00.671 [2024-07-24 19:53:29.224798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:00.671 [2024-07-24 19:53:29.224859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.671 [2024-07-24 19:53:29.276852] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:01.241 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:01.241 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:15:01.241 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:01.241 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.241 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.241 [2024-07-24 19:53:29.880259] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:01.241 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.241 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:01.241 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:01.241 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.504 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:01.504 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.504 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.504 Malloc0 00:15:01.504 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.504 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:01.504 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.504 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.504 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.504 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:01.504 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.504 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.504 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.504 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:01.504 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.504 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.504 [2024-07-24 19:53:29.979015] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.504 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.504 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:01.504 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.504 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.504 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.504 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:01.504 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.504 19:53:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:01.504 [ 00:15:01.504 { 00:15:01.504 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:01.504 "subtype": "Discovery", 00:15:01.504 "listen_addresses": [ 00:15:01.504 { 00:15:01.504 "trtype": "TCP", 00:15:01.504 "adrfam": "IPv4", 00:15:01.504 "traddr": "10.0.0.2", 00:15:01.504 "trsvcid": "4420" 00:15:01.504 } 00:15:01.504 ], 00:15:01.504 "allow_any_host": true, 00:15:01.504 "hosts": [] 00:15:01.504 }, 00:15:01.504 { 00:15:01.504 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.504 "subtype": "NVMe", 00:15:01.504 "listen_addresses": [ 00:15:01.504 { 00:15:01.504 "trtype": "TCP", 00:15:01.504 "adrfam": "IPv4", 00:15:01.504 "traddr": "10.0.0.2", 00:15:01.504 "trsvcid": "4420" 00:15:01.504 } 00:15:01.504 ], 00:15:01.504 "allow_any_host": true, 00:15:01.504 "hosts": [], 00:15:01.504 "serial_number": "SPDK00000000000001", 00:15:01.504 "model_number": "SPDK bdev Controller", 00:15:01.504 "max_namespaces": 32, 00:15:01.504 "min_cntlid": 1, 00:15:01.504 "max_cntlid": 65519, 00:15:01.504 "namespaces": [ 00:15:01.504 { 00:15:01.504 "nsid": 1, 00:15:01.504 "bdev_name": "Malloc0", 00:15:01.504 "name": "Malloc0", 00:15:01.504 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:01.504 "eui64": "ABCDEF0123456789", 00:15:01.504 "uuid": "682b3e74-6d2b-44cc-9bb7-2582e803d878" 00:15:01.504 } 00:15:01.504 ] 00:15:01.504 } 00:15:01.504 ] 00:15:01.504 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.504 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:01.504 [2024-07-24 19:53:30.033101] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
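Pulled together from the rpc_cmd calls in identify.sh above (rpc_cmd is the test framework's wrapper around scripts/rpc.py talking to the target's /var/tmp/spdk.sock), this is the target configuration that yields the nvmf_get_subsystems dump shown earlier, followed by the identify invocation whose nvme_ctrlr/nvme_tcp debug trace fills the remainder of this section:

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_get_subsystems
# host side, run from outside the namespace against the discovery subsystem:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all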
00:15:01.504 [2024-07-24 19:53:30.033157] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74010 ] 00:15:01.504 [2024-07-24 19:53:30.169928] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:15:01.504 [2024-07-24 19:53:30.170014] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:01.504 [2024-07-24 19:53:30.170021] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:01.504 [2024-07-24 19:53:30.170037] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:01.504 [2024-07-24 19:53:30.170048] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:01.504 [2024-07-24 19:53:30.170207] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:15:01.504 [2024-07-24 19:53:30.170258] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x7642c0 0 00:15:01.775 [2024-07-24 19:53:30.182764] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:01.775 [2024-07-24 19:53:30.182791] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:01.775 [2024-07-24 19:53:30.182798] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:01.775 [2024-07-24 19:53:30.182802] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:01.775 [2024-07-24 19:53:30.182851] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.775 [2024-07-24 19:53:30.182858] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.775 [2024-07-24 19:53:30.182863] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7642c0) 00:15:01.775 [2024-07-24 19:53:30.182878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:01.775 [2024-07-24 19:53:30.182909] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5940, cid 0, qid 0 00:15:01.775 [2024-07-24 19:53:30.190761] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.775 [2024-07-24 19:53:30.190785] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.775 [2024-07-24 19:53:30.190791] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.775 [2024-07-24 19:53:30.190797] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5940) on tqpair=0x7642c0 00:15:01.775 [2024-07-24 19:53:30.190811] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:01.775 [2024-07-24 19:53:30.190821] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:15:01.775 [2024-07-24 19:53:30.190827] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:15:01.775 [2024-07-24 19:53:30.190847] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.775 [2024-07-24 19:53:30.190852] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.775 
[2024-07-24 19:53:30.190856] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7642c0) 00:15:01.775 [2024-07-24 19:53:30.190866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.775 [2024-07-24 19:53:30.190896] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5940, cid 0, qid 0 00:15:01.775 [2024-07-24 19:53:30.190953] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.775 [2024-07-24 19:53:30.190961] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.775 [2024-07-24 19:53:30.190965] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.775 [2024-07-24 19:53:30.190970] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5940) on tqpair=0x7642c0 00:15:01.775 [2024-07-24 19:53:30.190976] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:15:01.775 [2024-07-24 19:53:30.190984] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:15:01.775 [2024-07-24 19:53:30.190993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.775 [2024-07-24 19:53:30.190997] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.775 [2024-07-24 19:53:30.191001] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7642c0) 00:15:01.775 [2024-07-24 19:53:30.191009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.775 [2024-07-24 19:53:30.191028] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5940, cid 0, qid 0 00:15:01.775 [2024-07-24 19:53:30.191074] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.775 [2024-07-24 19:53:30.191081] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.775 [2024-07-24 19:53:30.191085] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.775 [2024-07-24 19:53:30.191090] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5940) on tqpair=0x7642c0 00:15:01.775 [2024-07-24 19:53:30.191096] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:15:01.775 [2024-07-24 19:53:30.191105] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:15:01.775 [2024-07-24 19:53:30.191113] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.775 [2024-07-24 19:53:30.191118] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.775 [2024-07-24 19:53:30.191122] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7642c0) 00:15:01.775 [2024-07-24 19:53:30.191129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.775 [2024-07-24 19:53:30.191147] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5940, cid 0, qid 0 00:15:01.775 [2024-07-24 19:53:30.191195] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.775 [2024-07-24 19:53:30.191202] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:15:01.775 [2024-07-24 19:53:30.191206] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.775 [2024-07-24 19:53:30.191210] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5940) on tqpair=0x7642c0 00:15:01.775 [2024-07-24 19:53:30.191216] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:01.775 [2024-07-24 19:53:30.191227] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.775 [2024-07-24 19:53:30.191232] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.775 [2024-07-24 19:53:30.191236] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7642c0) 00:15:01.775 [2024-07-24 19:53:30.191243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.775 [2024-07-24 19:53:30.191261] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5940, cid 0, qid 0 00:15:01.775 [2024-07-24 19:53:30.191304] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.775 [2024-07-24 19:53:30.191312] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.775 [2024-07-24 19:53:30.191316] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.775 [2024-07-24 19:53:30.191320] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5940) on tqpair=0x7642c0 00:15:01.775 [2024-07-24 19:53:30.191325] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:15:01.775 [2024-07-24 19:53:30.191331] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:15:01.775 [2024-07-24 19:53:30.191339] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:01.775 [2024-07-24 19:53:30.191445] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:15:01.775 [2024-07-24 19:53:30.191451] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:01.775 [2024-07-24 19:53:30.191461] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.776 [2024-07-24 19:53:30.191465] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.776 [2024-07-24 19:53:30.191469] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7642c0) 00:15:01.776 [2024-07-24 19:53:30.191477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.776 [2024-07-24 19:53:30.191495] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5940, cid 0, qid 0 00:15:01.776 [2024-07-24 19:53:30.191552] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.776 [2024-07-24 19:53:30.191559] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.776 [2024-07-24 19:53:30.191563] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.776 [2024-07-24 19:53:30.191567] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5940) on tqpair=0x7642c0 00:15:01.776 [2024-07-24 19:53:30.191573] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:01.776 [2024-07-24 19:53:30.191583] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.776 [2024-07-24 19:53:30.191588] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.776 [2024-07-24 19:53:30.191592] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7642c0) 00:15:01.776 [2024-07-24 19:53:30.191600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.776 [2024-07-24 19:53:30.191617] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5940, cid 0, qid 0 00:15:01.776 [2024-07-24 19:53:30.191661] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.776 [2024-07-24 19:53:30.191668] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.776 [2024-07-24 19:53:30.191672] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.776 [2024-07-24 19:53:30.191676] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5940) on tqpair=0x7642c0 00:15:01.776 [2024-07-24 19:53:30.191681] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:01.776 [2024-07-24 19:53:30.191687] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:15:01.776 [2024-07-24 19:53:30.191695] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:15:01.776 [2024-07-24 19:53:30.191706] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:15:01.776 [2024-07-24 19:53:30.191717] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.776 [2024-07-24 19:53:30.191722] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7642c0) 00:15:01.776 [2024-07-24 19:53:30.191730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.776 [2024-07-24 19:53:30.191763] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5940, cid 0, qid 0 00:15:01.776 [2024-07-24 19:53:30.191851] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:01.776 [2024-07-24 19:53:30.191858] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:01.776 [2024-07-24 19:53:30.191863] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:01.776 [2024-07-24 19:53:30.191867] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7642c0): datao=0, datal=4096, cccid=0 00:15:01.776 [2024-07-24 19:53:30.191872] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7a5940) on tqpair(0x7642c0): expected_datao=0, payload_size=4096 00:15:01.776 [2024-07-24 19:53:30.191879] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.776 [2024-07-24 19:53:30.191887] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:01.776 [2024-07-24 19:53:30.191892] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:01.776 [2024-07-24 19:53:30.191901] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.776 [2024-07-24 19:53:30.191907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.776 [2024-07-24 19:53:30.191911] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.776 [2024-07-24 19:53:30.191915] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5940) on tqpair=0x7642c0 00:15:01.776 [2024-07-24 19:53:30.191925] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:15:01.776 [2024-07-24 19:53:30.191931] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:15:01.776 [2024-07-24 19:53:30.191936] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:15:01.776 [2024-07-24 19:53:30.191946] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:15:01.776 [2024-07-24 19:53:30.191952] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:15:01.776 [2024-07-24 19:53:30.191957] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:15:01.776 [2024-07-24 19:53:30.191967] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:15:01.776 [2024-07-24 19:53:30.191986] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.776 [2024-07-24 19:53:30.191991] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.776 [2024-07-24 19:53:30.191996] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7642c0) 00:15:01.776 [2024-07-24 19:53:30.192004] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:01.776 [2024-07-24 19:53:30.192025] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5940, cid 0, qid 0 00:15:01.776 [2024-07-24 19:53:30.192082] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.776 [2024-07-24 19:53:30.192089] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.776 [2024-07-24 19:53:30.192093] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.776 [2024-07-24 19:53:30.192098] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5940) on tqpair=0x7642c0 00:15:01.776 [2024-07-24 19:53:30.192106] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.776 [2024-07-24 19:53:30.192110] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.776 [2024-07-24 19:53:30.192114] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7642c0) 00:15:01.776 [2024-07-24 19:53:30.192121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.776 [2024-07-24 19:53:30.192128] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:15:01.776 [2024-07-24 19:53:30.192132] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.776 [2024-07-24 19:53:30.192136] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x7642c0) 00:15:01.776 [2024-07-24 19:53:30.192142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.776 [2024-07-24 19:53:30.192149] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.776 [2024-07-24 19:53:30.192153] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.776 [2024-07-24 19:53:30.192157] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x7642c0) 00:15:01.776 [2024-07-24 19:53:30.192174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.776 [2024-07-24 19:53:30.192180] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.776 [2024-07-24 19:53:30.192184] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.776 [2024-07-24 19:53:30.192188] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7642c0) 00:15:01.776 [2024-07-24 19:53:30.192194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.776 [2024-07-24 19:53:30.192200] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:15:01.776 [2024-07-24 19:53:30.192209] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:01.776 [2024-07-24 19:53:30.192216] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.776 [2024-07-24 19:53:30.192220] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7642c0) 00:15:01.776 [2024-07-24 19:53:30.192227] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.776 [2024-07-24 19:53:30.192252] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5940, cid 0, qid 0 00:15:01.776 [2024-07-24 19:53:30.192260] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5ac0, cid 1, qid 0 00:15:01.776 [2024-07-24 19:53:30.192265] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5c40, cid 2, qid 0 00:15:01.776 [2024-07-24 19:53:30.192270] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5dc0, cid 3, qid 0 00:15:01.776 [2024-07-24 19:53:30.192275] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5f40, cid 4, qid 0 00:15:01.776 [2024-07-24 19:53:30.192357] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.776 [2024-07-24 19:53:30.192364] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.776 [2024-07-24 19:53:30.192368] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.776 [2024-07-24 19:53:30.192372] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5f40) on tqpair=0x7642c0 00:15:01.776 [2024-07-24 19:53:30.192378] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:15:01.776 [2024-07-24 19:53:30.192384] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:15:01.776 [2024-07-24 19:53:30.192395] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.776 [2024-07-24 19:53:30.192400] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7642c0) 00:15:01.776 [2024-07-24 19:53:30.192408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.776 [2024-07-24 19:53:30.192425] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5f40, cid 4, qid 0 00:15:01.776 [2024-07-24 19:53:30.192482] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:01.776 [2024-07-24 19:53:30.192489] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:01.776 [2024-07-24 19:53:30.192493] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:01.776 [2024-07-24 19:53:30.192497] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7642c0): datao=0, datal=4096, cccid=4 00:15:01.777 [2024-07-24 19:53:30.192502] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7a5f40) on tqpair(0x7642c0): expected_datao=0, payload_size=4096 00:15:01.777 [2024-07-24 19:53:30.192507] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.777 [2024-07-24 19:53:30.192515] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:01.777 [2024-07-24 19:53:30.192519] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:01.777 [2024-07-24 19:53:30.192528] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.777 [2024-07-24 19:53:30.192534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.777 [2024-07-24 19:53:30.192538] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.777 [2024-07-24 19:53:30.192542] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5f40) on tqpair=0x7642c0 00:15:01.777 [2024-07-24 19:53:30.192556] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:15:01.777 [2024-07-24 19:53:30.192584] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.777 [2024-07-24 19:53:30.192591] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7642c0) 00:15:01.777 [2024-07-24 19:53:30.192598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.777 [2024-07-24 19:53:30.192606] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.777 [2024-07-24 19:53:30.192611] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.777 [2024-07-24 19:53:30.192614] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7642c0) 00:15:01.777 [2024-07-24 19:53:30.192622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.777 [2024-07-24 19:53:30.192646] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5f40, cid 4, qid 0 00:15:01.777 [2024-07-24 19:53:30.192654] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a60c0, cid 5, qid 0 00:15:01.777 [2024-07-24 19:53:30.192768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:01.777 [2024-07-24 19:53:30.192777] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:01.777 [2024-07-24 19:53:30.192781] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:01.777 [2024-07-24 19:53:30.192785] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7642c0): datao=0, datal=1024, cccid=4 00:15:01.777 [2024-07-24 19:53:30.192790] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7a5f40) on tqpair(0x7642c0): expected_datao=0, payload_size=1024 00:15:01.777 [2024-07-24 19:53:30.192795] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.777 [2024-07-24 19:53:30.192802] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:01.777 [2024-07-24 19:53:30.192807] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:01.777 [2024-07-24 19:53:30.192813] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.777 [2024-07-24 19:53:30.192819] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.777 [2024-07-24 19:53:30.192823] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.777 [2024-07-24 19:53:30.192827] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a60c0) on tqpair=0x7642c0 00:15:01.777 [2024-07-24 19:53:30.192845] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.777 [2024-07-24 19:53:30.192853] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.777 [2024-07-24 19:53:30.192857] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.777 [2024-07-24 19:53:30.192861] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5f40) on tqpair=0x7642c0 00:15:01.777 [2024-07-24 19:53:30.192874] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.777 [2024-07-24 19:53:30.192879] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7642c0) 00:15:01.777 [2024-07-24 19:53:30.192887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.777 [2024-07-24 19:53:30.192912] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5f40, cid 4, qid 0 00:15:01.777 [2024-07-24 19:53:30.192979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:01.777 [2024-07-24 19:53:30.192986] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:01.777 [2024-07-24 19:53:30.192990] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:01.777 [2024-07-24 19:53:30.192994] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7642c0): datao=0, datal=3072, cccid=4 00:15:01.777 [2024-07-24 19:53:30.192999] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7a5f40) on tqpair(0x7642c0): expected_datao=0, payload_size=3072 00:15:01.777 [2024-07-24 19:53:30.193004] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.777 [2024-07-24 19:53:30.193011] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:01.777 [2024-07-24 19:53:30.193015] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:01.777 [2024-07-24 
19:53:30.193023] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.777 [2024-07-24 19:53:30.193030] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.777 [2024-07-24 19:53:30.193033] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.777 [2024-07-24 19:53:30.193038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5f40) on tqpair=0x7642c0 00:15:01.777 [2024-07-24 19:53:30.193049] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.777 [2024-07-24 19:53:30.193053] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7642c0) 00:15:01.777 [2024-07-24 19:53:30.193061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.777 [2024-07-24 19:53:30.193084] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5f40, cid 4, qid 0 00:15:01.777 [2024-07-24 19:53:30.193144] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:01.777 [2024-07-24 19:53:30.193151] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:01.777 [2024-07-24 19:53:30.193155] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:01.777 [2024-07-24 19:53:30.193159] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7642c0): datao=0, datal=8, cccid=4 00:15:01.777 [2024-07-24 19:53:30.193164] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x7a5f40) on tqpair(0x7642c0): expected_datao=0, payload_size=8 00:15:01.777 [2024-07-24 19:53:30.193169] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.777 [2024-07-24 19:53:30.193176] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:01.777 [2024-07-24 19:53:30.193180] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:01.777 [2024-07-24 19:53:30.193195] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.777 [2024-07-24 19:53:30.193202] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.777 [2024-07-24 19:53:30.193206] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.777 ===================================================== 00:15:01.777 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:01.777 ===================================================== 00:15:01.777 Controller Capabilities/Features 00:15:01.777 ================================ 00:15:01.777 Vendor ID: 0000 00:15:01.777 Subsystem Vendor ID: 0000 00:15:01.777 Serial Number: .................... 00:15:01.777 Model Number: ........................................ 
00:15:01.777 Firmware Version: 24.09 00:15:01.777 Recommended Arb Burst: 0 00:15:01.777 IEEE OUI Identifier: 00 00 00 00:15:01.777 Multi-path I/O 00:15:01.777 May have multiple subsystem ports: No 00:15:01.777 May have multiple controllers: No 00:15:01.777 Associated with SR-IOV VF: No 00:15:01.777 Max Data Transfer Size: 131072 00:15:01.777 Max Number of Namespaces: 0 00:15:01.777 Max Number of I/O Queues: 1024 00:15:01.777 NVMe Specification Version (VS): 1.3 00:15:01.777 NVMe Specification Version (Identify): 1.3 00:15:01.777 Maximum Queue Entries: 128 00:15:01.777 Contiguous Queues Required: Yes 00:15:01.777 Arbitration Mechanisms Supported 00:15:01.777 Weighted Round Robin: Not Supported 00:15:01.777 Vendor Specific: Not Supported 00:15:01.777 Reset Timeout: 15000 ms 00:15:01.777 Doorbell Stride: 4 bytes 00:15:01.777 NVM Subsystem Reset: Not Supported 00:15:01.777 Command Sets Supported 00:15:01.777 NVM Command Set: Supported 00:15:01.777 Boot Partition: Not Supported 00:15:01.777 Memory Page Size Minimum: 4096 bytes 00:15:01.777 Memory Page Size Maximum: 4096 bytes 00:15:01.777 Persistent Memory Region: Not Supported 00:15:01.777 Optional Asynchronous Events Supported 00:15:01.777 Namespace Attribute Notices: Not Supported 00:15:01.777 Firmware Activation Notices: Not Supported 00:15:01.777 ANA Change Notices: Not Supported 00:15:01.777 PLE Aggregate Log Change Notices: Not Supported 00:15:01.777 LBA Status Info Alert Notices: Not Supported 00:15:01.777 EGE Aggregate Log Change Notices: Not Supported 00:15:01.777 Normal NVM Subsystem Shutdown event: Not Supported 00:15:01.777 Zone Descriptor Change Notices: Not Supported 00:15:01.777 Discovery Log Change Notices: Supported 00:15:01.777 Controller Attributes 00:15:01.777 128-bit Host Identifier: Not Supported 00:15:01.777 Non-Operational Permissive Mode: Not Supported 00:15:01.777 NVM Sets: Not Supported 00:15:01.777 Read Recovery Levels: Not Supported 00:15:01.777 Endurance Groups: Not Supported 00:15:01.777 Predictable Latency Mode: Not Supported 00:15:01.777 Traffic Based Keep ALive: Not Supported 00:15:01.777 Namespace Granularity: Not Supported 00:15:01.777 SQ Associations: Not Supported 00:15:01.777 UUID List: Not Supported 00:15:01.777 Multi-Domain Subsystem: Not Supported 00:15:01.777 Fixed Capacity Management: Not Supported 00:15:01.777 Variable Capacity Management: Not Supported 00:15:01.777 Delete Endurance Group: Not Supported 00:15:01.777 Delete NVM Set: Not Supported 00:15:01.777 Extended LBA Formats Supported: Not Supported 00:15:01.777 Flexible Data Placement Supported: Not Supported 00:15:01.777 00:15:01.777 Controller Memory Buffer Support 00:15:01.777 ================================ 00:15:01.778 Supported: No 00:15:01.778 00:15:01.778 Persistent Memory Region Support 00:15:01.778 ================================ 00:15:01.778 Supported: No 00:15:01.778 00:15:01.778 Admin Command Set Attributes 00:15:01.778 ============================ 00:15:01.778 Security Send/Receive: Not Supported 00:15:01.778 Format NVM: Not Supported 00:15:01.778 Firmware Activate/Download: Not Supported 00:15:01.778 Namespace Management: Not Supported 00:15:01.778 Device Self-Test: Not Supported 00:15:01.778 Directives: Not Supported 00:15:01.778 NVMe-MI: Not Supported 00:15:01.778 Virtualization Management: Not Supported 00:15:01.778 Doorbell Buffer Config: Not Supported 00:15:01.778 Get LBA Status Capability: Not Supported 00:15:01.778 Command & Feature Lockdown Capability: Not Supported 00:15:01.778 Abort Command Limit: 1 00:15:01.778 Async 
Event Request Limit: 4 00:15:01.778 Number of Firmware Slots: N/A 00:15:01.778 Firmware Slot 1 Read-Only: N/A 00:15:01.778 Firmware Activation Without Reset: N/A 00:15:01.778 Multiple Update Detection Support: N/A 00:15:01.778 Firmware Update Granularity: No Information Provided 00:15:01.778 Per-Namespace SMART Log: No 00:15:01.778 Asymmetric Namespace Access Log Page: Not Supported 00:15:01.778 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:01.778 Command Effects Log Page: Not Supported 00:15:01.778 Get Log Page Extended Data: Supported 00:15:01.778 Telemetry Log Pages: Not Supported 00:15:01.778 Persistent Event Log Pages: Not Supported 00:15:01.778 Supported Log Pages Log Page: May Support 00:15:01.778 Commands Supported & Effects Log Page: Not Supported 00:15:01.778 Feature Identifiers & Effects Log Page:May Support 00:15:01.778 NVMe-MI Commands & Effects Log Page: May Support 00:15:01.778 Data Area 4 for Telemetry Log: Not Supported 00:15:01.778 Error Log Page Entries Supported: 128 00:15:01.778 Keep Alive: Not Supported 00:15:01.778 00:15:01.778 NVM Command Set Attributes 00:15:01.778 ========================== 00:15:01.778 Submission Queue Entry Size 00:15:01.778 Max: 1 00:15:01.778 Min: 1 00:15:01.778 Completion Queue Entry Size 00:15:01.778 Max: 1 00:15:01.778 Min: 1 00:15:01.778 Number of Namespaces: 0 00:15:01.778 Compare Command: Not Supported 00:15:01.778 Write Uncorrectable Command: Not Supported 00:15:01.778 Dataset Management Command: Not Supported 00:15:01.778 Write Zeroes Command: Not Supported 00:15:01.778 Set Features Save Field: Not Supported 00:15:01.778 Reservations: Not Supported 00:15:01.778 Timestamp: Not Supported 00:15:01.778 Copy: Not Supported 00:15:01.778 Volatile Write Cache: Not Present 00:15:01.778 Atomic Write Unit (Normal): 1 00:15:01.778 Atomic Write Unit (PFail): 1 00:15:01.778 Atomic Compare & Write Unit: 1 00:15:01.778 Fused Compare & Write: Supported 00:15:01.778 Scatter-Gather List 00:15:01.778 SGL Command Set: Supported 00:15:01.778 SGL Keyed: Supported 00:15:01.778 SGL Bit Bucket Descriptor: Not Supported 00:15:01.778 SGL Metadata Pointer: Not Supported 00:15:01.778 Oversized SGL: Not Supported 00:15:01.778 SGL Metadata Address: Not Supported 00:15:01.778 SGL Offset: Supported 00:15:01.778 Transport SGL Data Block: Not Supported 00:15:01.778 Replay Protected Memory Block: Not Supported 00:15:01.778 00:15:01.778 Firmware Slot Information 00:15:01.778 ========================= 00:15:01.778 Active slot: 0 00:15:01.778 00:15:01.778 00:15:01.778 Error Log 00:15:01.778 ========= 00:15:01.778 00:15:01.778 Active Namespaces 00:15:01.778 ================= 00:15:01.778 Discovery Log Page 00:15:01.778 ================== 00:15:01.778 Generation Counter: 2 00:15:01.778 Number of Records: 2 00:15:01.778 Record Format: 0 00:15:01.778 00:15:01.778 Discovery Log Entry 0 00:15:01.778 ---------------------- 00:15:01.778 Transport Type: 3 (TCP) 00:15:01.778 Address Family: 1 (IPv4) 00:15:01.778 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:01.778 Entry Flags: 00:15:01.778 Duplicate Returned Information: 1 00:15:01.778 Explicit Persistent Connection Support for Discovery: 1 00:15:01.778 Transport Requirements: 00:15:01.778 Secure Channel: Not Required 00:15:01.778 Port ID: 0 (0x0000) 00:15:01.778 Controller ID: 65535 (0xffff) 00:15:01.778 Admin Max SQ Size: 128 00:15:01.778 Transport Service Identifier: 4420 00:15:01.778 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:01.778 Transport Address: 10.0.0.2 00:15:01.778 
Discovery Log Entry 1 00:15:01.778 ---------------------- 00:15:01.778 Transport Type: 3 (TCP) 00:15:01.778 Address Family: 1 (IPv4) 00:15:01.778 Subsystem Type: 2 (NVM Subsystem) 00:15:01.778 Entry Flags: 00:15:01.778 Duplicate Returned Information: 0 00:15:01.778 Explicit Persistent Connection Support for Discovery: 0 00:15:01.778 Transport Requirements: 00:15:01.778 Secure Channel: Not Required 00:15:01.778 Port ID: 0 (0x0000) 00:15:01.778 Controller ID: 65535 (0xffff) 00:15:01.778 Admin Max SQ Size: 128 00:15:01.778 Transport Service Identifier: 4420 00:15:01.778 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:01.778 Transport Address: 10.0.0.2 [2024-07-24 19:53:30.193210] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5f40) on tqpair=0x7642c0 00:15:01.778 [2024-07-24 19:53:30.193311] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:15:01.778 [2024-07-24 19:53:30.193324] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5940) on tqpair=0x7642c0 00:15:01.778 [2024-07-24 19:53:30.193332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.778 [2024-07-24 19:53:30.193338] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5ac0) on tqpair=0x7642c0 00:15:01.778 [2024-07-24 19:53:30.193343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.778 [2024-07-24 19:53:30.193348] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5c40) on tqpair=0x7642c0 00:15:01.778 [2024-07-24 19:53:30.193353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.778 [2024-07-24 19:53:30.193358] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5dc0) on tqpair=0x7642c0 00:15:01.778 [2024-07-24 19:53:30.193363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.778 [2024-07-24 19:53:30.193373] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.778 [2024-07-24 19:53:30.193377] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.778 [2024-07-24 19:53:30.193381] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7642c0) 00:15:01.778 [2024-07-24 19:53:30.193389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.778 [2024-07-24 19:53:30.193412] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5dc0, cid 3, qid 0 00:15:01.778 [2024-07-24 19:53:30.193456] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.778 [2024-07-24 19:53:30.193463] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.778 [2024-07-24 19:53:30.193467] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.778 [2024-07-24 19:53:30.193471] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5dc0) on tqpair=0x7642c0 00:15:01.778 [2024-07-24 19:53:30.193484] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.778 [2024-07-24 19:53:30.193489] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.778 [2024-07-24 19:53:30.193493] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7642c0) 00:15:01.778 [2024-07-24 19:53:30.193500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.778 [2024-07-24 19:53:30.193523] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5dc0, cid 3, qid 0 00:15:01.778 [2024-07-24 19:53:30.193587] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.778 [2024-07-24 19:53:30.193594] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.778 [2024-07-24 19:53:30.193598] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.778 [2024-07-24 19:53:30.193602] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5dc0) on tqpair=0x7642c0 00:15:01.778 [2024-07-24 19:53:30.193608] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:15:01.778 [2024-07-24 19:53:30.193613] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:15:01.778 [2024-07-24 19:53:30.193623] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.778 [2024-07-24 19:53:30.193628] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.778 [2024-07-24 19:53:30.193632] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7642c0) 00:15:01.778 [2024-07-24 19:53:30.193639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.778 [2024-07-24 19:53:30.193657] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5dc0, cid 3, qid 0 00:15:01.779 [2024-07-24 19:53:30.193702] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.779 [2024-07-24 19:53:30.193709] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.779 [2024-07-24 19:53:30.193712] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.193717] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5dc0) on tqpair=0x7642c0 00:15:01.779 [2024-07-24 19:53:30.193728] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.193733] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.193751] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7642c0) 00:15:01.779 [2024-07-24 19:53:30.193761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.779 [2024-07-24 19:53:30.193780] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5dc0, cid 3, qid 0 00:15:01.779 [2024-07-24 19:53:30.193830] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.779 [2024-07-24 19:53:30.193837] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.779 [2024-07-24 19:53:30.193841] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.193845] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5dc0) on tqpair=0x7642c0 00:15:01.779 [2024-07-24 19:53:30.193856] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.193861] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.193865] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7642c0) 00:15:01.779 [2024-07-24 19:53:30.193872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.779 [2024-07-24 19:53:30.193890] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5dc0, cid 3, qid 0 00:15:01.779 [2024-07-24 19:53:30.193937] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.779 [2024-07-24 19:53:30.193944] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.779 [2024-07-24 19:53:30.193948] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.193952] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5dc0) on tqpair=0x7642c0 00:15:01.779 [2024-07-24 19:53:30.193963] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.193968] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.193972] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7642c0) 00:15:01.779 [2024-07-24 19:53:30.193979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.779 [2024-07-24 19:53:30.193996] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5dc0, cid 3, qid 0 00:15:01.779 [2024-07-24 19:53:30.194042] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.779 [2024-07-24 19:53:30.194049] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.779 [2024-07-24 19:53:30.194053] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.194057] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5dc0) on tqpair=0x7642c0 00:15:01.779 [2024-07-24 19:53:30.194068] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.194072] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.194076] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7642c0) 00:15:01.779 [2024-07-24 19:53:30.194084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.779 [2024-07-24 19:53:30.194100] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5dc0, cid 3, qid 0 00:15:01.779 [2024-07-24 19:53:30.194148] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.779 [2024-07-24 19:53:30.194155] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.779 [2024-07-24 19:53:30.194159] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.194163] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5dc0) on tqpair=0x7642c0 00:15:01.779 [2024-07-24 19:53:30.194174] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.194178] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.194182] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7642c0) 00:15:01.779 
[2024-07-24 19:53:30.194190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.779 [2024-07-24 19:53:30.194206] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5dc0, cid 3, qid 0 00:15:01.779 [2024-07-24 19:53:30.194258] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.779 [2024-07-24 19:53:30.194265] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.779 [2024-07-24 19:53:30.194269] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.194273] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5dc0) on tqpair=0x7642c0 00:15:01.779 [2024-07-24 19:53:30.194284] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.194289] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.194293] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7642c0) 00:15:01.779 [2024-07-24 19:53:30.194300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.779 [2024-07-24 19:53:30.194317] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5dc0, cid 3, qid 0 00:15:01.779 [2024-07-24 19:53:30.194365] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.779 [2024-07-24 19:53:30.194371] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.779 [2024-07-24 19:53:30.194375] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.194379] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5dc0) on tqpair=0x7642c0 00:15:01.779 [2024-07-24 19:53:30.194390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.194395] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.194399] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7642c0) 00:15:01.779 [2024-07-24 19:53:30.194406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.779 [2024-07-24 19:53:30.194423] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5dc0, cid 3, qid 0 00:15:01.779 [2024-07-24 19:53:30.194465] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.779 [2024-07-24 19:53:30.194472] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.779 [2024-07-24 19:53:30.194476] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.194480] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5dc0) on tqpair=0x7642c0 00:15:01.779 [2024-07-24 19:53:30.194491] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.194495] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.194499] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7642c0) 00:15:01.779 [2024-07-24 19:53:30.194507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.779 [2024-07-24 19:53:30.194523] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5dc0, cid 3, qid 0 00:15:01.779 [2024-07-24 19:53:30.194571] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.779 [2024-07-24 19:53:30.194578] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.779 [2024-07-24 19:53:30.194582] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.194586] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5dc0) on tqpair=0x7642c0 00:15:01.779 [2024-07-24 19:53:30.194597] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.194601] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.194605] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7642c0) 00:15:01.779 [2024-07-24 19:53:30.194613] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.779 [2024-07-24 19:53:30.194629] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5dc0, cid 3, qid 0 00:15:01.779 [2024-07-24 19:53:30.194677] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.779 [2024-07-24 19:53:30.194684] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.779 [2024-07-24 19:53:30.194688] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.194692] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5dc0) on tqpair=0x7642c0 00:15:01.779 [2024-07-24 19:53:30.194703] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.194708] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.194712] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7642c0) 00:15:01.779 [2024-07-24 19:53:30.194719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.779 [2024-07-24 19:53:30.198753] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5dc0, cid 3, qid 0 00:15:01.779 [2024-07-24 19:53:30.198780] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.779 [2024-07-24 19:53:30.198789] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.779 [2024-07-24 19:53:30.198793] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.198797] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5dc0) on tqpair=0x7642c0 00:15:01.779 [2024-07-24 19:53:30.198812] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.198817] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.779 [2024-07-24 19:53:30.198821] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7642c0) 00:15:01.779 [2024-07-24 19:53:30.198831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.779 [2024-07-24 19:53:30.198856] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x7a5dc0, cid 3, qid 0 00:15:01.779 [2024-07-24 19:53:30.198904] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.779 [2024-07-24 
19:53:30.198911] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.779 [2024-07-24 19:53:30.198916] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.780 [2024-07-24 19:53:30.198920] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x7a5dc0) on tqpair=0x7642c0 00:15:01.780 [2024-07-24 19:53:30.198929] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:15:01.780 00:15:01.780 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:01.780 [2024-07-24 19:53:30.241709] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:15:01.780 [2024-07-24 19:53:30.241769] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74012 ] 00:15:01.780 [2024-07-24 19:53:30.377928] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:15:01.780 [2024-07-24 19:53:30.378013] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:01.780 [2024-07-24 19:53:30.378021] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:01.780 [2024-07-24 19:53:30.378035] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:01.780 [2024-07-24 19:53:30.378047] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:01.780 [2024-07-24 19:53:30.378193] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:15:01.780 [2024-07-24 19:53:30.378243] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x121d2c0 0 00:15:01.780 [2024-07-24 19:53:30.390762] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:01.780 [2024-07-24 19:53:30.390790] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:01.780 [2024-07-24 19:53:30.390797] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:01.780 [2024-07-24 19:53:30.390801] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:01.780 [2024-07-24 19:53:30.390852] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.780 [2024-07-24 19:53:30.390859] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.780 [2024-07-24 19:53:30.390864] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121d2c0) 00:15:01.780 [2024-07-24 19:53:30.390881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:01.780 [2024-07-24 19:53:30.390915] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125e940, cid 0, qid 0 00:15:01.780 [2024-07-24 19:53:30.398755] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.780 [2024-07-24 19:53:30.398781] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.780 [2024-07-24 19:53:30.398786] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.780 [2024-07-24 19:53:30.398792] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125e940) on tqpair=0x121d2c0 00:15:01.780 [2024-07-24 19:53:30.398808] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:01.780 [2024-07-24 19:53:30.398819] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:15:01.780 [2024-07-24 19:53:30.398826] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:15:01.780 [2024-07-24 19:53:30.398846] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.780 [2024-07-24 19:53:30.398852] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.780 [2024-07-24 19:53:30.398856] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121d2c0) 00:15:01.780 [2024-07-24 19:53:30.398867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.780 [2024-07-24 19:53:30.398897] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125e940, cid 0, qid 0 00:15:01.780 [2024-07-24 19:53:30.398958] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.780 [2024-07-24 19:53:30.398966] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.780 [2024-07-24 19:53:30.398970] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.780 [2024-07-24 19:53:30.398979] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125e940) on tqpair=0x121d2c0 00:15:01.780 [2024-07-24 19:53:30.398985] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:15:01.780 [2024-07-24 19:53:30.398993] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:15:01.780 [2024-07-24 19:53:30.399002] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.780 [2024-07-24 19:53:30.399006] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.780 [2024-07-24 19:53:30.399011] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121d2c0) 00:15:01.780 [2024-07-24 19:53:30.399018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.780 [2024-07-24 19:53:30.399038] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125e940, cid 0, qid 0 00:15:01.780 [2024-07-24 19:53:30.399371] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.780 [2024-07-24 19:53:30.399388] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.780 [2024-07-24 19:53:30.399393] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.780 [2024-07-24 19:53:30.399397] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125e940) on tqpair=0x121d2c0 00:15:01.780 [2024-07-24 19:53:30.399404] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:15:01.780 [2024-07-24 19:53:30.399414] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 
00:15:01.780 [2024-07-24 19:53:30.399422] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.780 [2024-07-24 19:53:30.399427] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.780 [2024-07-24 19:53:30.399431] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121d2c0) 00:15:01.780 [2024-07-24 19:53:30.399439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.780 [2024-07-24 19:53:30.399460] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125e940, cid 0, qid 0 00:15:01.780 [2024-07-24 19:53:30.399509] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.780 [2024-07-24 19:53:30.399516] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.780 [2024-07-24 19:53:30.399520] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.780 [2024-07-24 19:53:30.399524] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125e940) on tqpair=0x121d2c0 00:15:01.780 [2024-07-24 19:53:30.399530] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:01.780 [2024-07-24 19:53:30.399541] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.780 [2024-07-24 19:53:30.399546] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.780 [2024-07-24 19:53:30.399550] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121d2c0) 00:15:01.780 [2024-07-24 19:53:30.399557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.780 [2024-07-24 19:53:30.399576] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125e940, cid 0, qid 0 00:15:01.780 [2024-07-24 19:53:30.399963] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.780 [2024-07-24 19:53:30.399987] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.780 [2024-07-24 19:53:30.399992] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.780 [2024-07-24 19:53:30.399997] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125e940) on tqpair=0x121d2c0 00:15:01.780 [2024-07-24 19:53:30.400002] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:15:01.780 [2024-07-24 19:53:30.400008] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:15:01.780 [2024-07-24 19:53:30.400017] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:01.780 [2024-07-24 19:53:30.400123] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:15:01.781 [2024-07-24 19:53:30.400129] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:01.781 [2024-07-24 19:53:30.400139] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.781 [2024-07-24 19:53:30.400143] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.781 [2024-07-24 
19:53:30.400148] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121d2c0) 00:15:01.781 [2024-07-24 19:53:30.400156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.781 [2024-07-24 19:53:30.400178] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125e940, cid 0, qid 0 00:15:01.781 [2024-07-24 19:53:30.400551] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.781 [2024-07-24 19:53:30.400566] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.781 [2024-07-24 19:53:30.400571] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.781 [2024-07-24 19:53:30.400575] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125e940) on tqpair=0x121d2c0 00:15:01.781 [2024-07-24 19:53:30.400580] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:01.781 [2024-07-24 19:53:30.400592] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.781 [2024-07-24 19:53:30.400597] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.781 [2024-07-24 19:53:30.400601] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121d2c0) 00:15:01.781 [2024-07-24 19:53:30.400609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.781 [2024-07-24 19:53:30.400628] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125e940, cid 0, qid 0 00:15:01.781 [2024-07-24 19:53:30.400674] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.781 [2024-07-24 19:53:30.400682] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.781 [2024-07-24 19:53:30.400685] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.781 [2024-07-24 19:53:30.400690] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125e940) on tqpair=0x121d2c0 00:15:01.781 [2024-07-24 19:53:30.400694] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:01.781 [2024-07-24 19:53:30.400700] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:15:01.781 [2024-07-24 19:53:30.400708] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:15:01.781 [2024-07-24 19:53:30.400719] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:15:01.781 [2024-07-24 19:53:30.400730] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.781 [2024-07-24 19:53:30.400735] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121d2c0) 00:15:01.781 [2024-07-24 19:53:30.400756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.781 [2024-07-24 19:53:30.400778] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125e940, cid 0, qid 0 00:15:01.781 [2024-07-24 19:53:30.400968] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:01.781 [2024-07-24 19:53:30.400975] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:01.781 [2024-07-24 19:53:30.400979] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:01.781 [2024-07-24 19:53:30.400984] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121d2c0): datao=0, datal=4096, cccid=0 00:15:01.781 [2024-07-24 19:53:30.400989] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125e940) on tqpair(0x121d2c0): expected_datao=0, payload_size=4096 00:15:01.781 [2024-07-24 19:53:30.400994] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.781 [2024-07-24 19:53:30.401003] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:01.781 [2024-07-24 19:53:30.401007] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:01.781 [2024-07-24 19:53:30.401064] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.781 [2024-07-24 19:53:30.401071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.781 [2024-07-24 19:53:30.401075] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.781 [2024-07-24 19:53:30.401079] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125e940) on tqpair=0x121d2c0 00:15:01.781 [2024-07-24 19:53:30.401089] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:15:01.781 [2024-07-24 19:53:30.401094] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:15:01.781 [2024-07-24 19:53:30.401099] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:15:01.781 [2024-07-24 19:53:30.401108] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:15:01.781 [2024-07-24 19:53:30.401114] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:15:01.781 [2024-07-24 19:53:30.401119] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:15:01.781 [2024-07-24 19:53:30.401129] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:15:01.781 [2024-07-24 19:53:30.401138] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.781 [2024-07-24 19:53:30.401143] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.781 [2024-07-24 19:53:30.401147] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121d2c0) 00:15:01.781 [2024-07-24 19:53:30.401155] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:01.781 [2024-07-24 19:53:30.401175] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125e940, cid 0, qid 0 00:15:01.781 [2024-07-24 19:53:30.401322] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.781 [2024-07-24 19:53:30.401329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.781 [2024-07-24 19:53:30.401333] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.781 [2024-07-24 19:53:30.401338] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125e940) on tqpair=0x121d2c0 00:15:01.781 [2024-07-24 19:53:30.401346] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.781 [2024-07-24 19:53:30.401350] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.781 [2024-07-24 19:53:30.401354] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x121d2c0) 00:15:01.781 [2024-07-24 19:53:30.401361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.781 [2024-07-24 19:53:30.401369] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.781 [2024-07-24 19:53:30.401373] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.781 [2024-07-24 19:53:30.401377] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x121d2c0) 00:15:01.781 [2024-07-24 19:53:30.401384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.781 [2024-07-24 19:53:30.401390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.781 [2024-07-24 19:53:30.401395] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.781 [2024-07-24 19:53:30.401398] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x121d2c0) 00:15:01.781 [2024-07-24 19:53:30.401405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.781 [2024-07-24 19:53:30.401412] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.781 [2024-07-24 19:53:30.401416] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.781 [2024-07-24 19:53:30.401420] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121d2c0) 00:15:01.781 [2024-07-24 19:53:30.401426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.781 [2024-07-24 19:53:30.401431] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:01.781 [2024-07-24 19:53:30.401441] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:01.781 [2024-07-24 19:53:30.401448] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.781 [2024-07-24 19:53:30.401453] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x121d2c0) 00:15:01.781 [2024-07-24 19:53:30.401460] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.781 [2024-07-24 19:53:30.401485] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125e940, cid 0, qid 0 00:15:01.781 [2024-07-24 19:53:30.401493] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125eac0, cid 1, qid 0 00:15:01.781 [2024-07-24 19:53:30.401498] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125ec40, cid 2, qid 0 00:15:01.781 [2024-07-24 19:53:30.401503] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125edc0, cid 3, qid 0 00:15:01.781 
[2024-07-24 19:53:30.401508] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125ef40, cid 4, qid 0 00:15:01.781 [2024-07-24 19:53:30.401890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.781 [2024-07-24 19:53:30.401907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.781 [2024-07-24 19:53:30.401911] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.781 [2024-07-24 19:53:30.401916] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125ef40) on tqpair=0x121d2c0 00:15:01.781 [2024-07-24 19:53:30.401922] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:15:01.781 [2024-07-24 19:53:30.401927] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:01.781 [2024-07-24 19:53:30.401937] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:15:01.781 [2024-07-24 19:53:30.401945] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:01.781 [2024-07-24 19:53:30.401952] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.781 [2024-07-24 19:53:30.401957] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.781 [2024-07-24 19:53:30.401961] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x121d2c0) 00:15:01.781 [2024-07-24 19:53:30.401970] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:01.781 [2024-07-24 19:53:30.401991] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125ef40, cid 4, qid 0 00:15:01.782 [2024-07-24 19:53:30.402046] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.782 [2024-07-24 19:53:30.402053] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.782 [2024-07-24 19:53:30.402057] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.402062] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125ef40) on tqpair=0x121d2c0 00:15:01.782 [2024-07-24 19:53:30.402131] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:15:01.782 [2024-07-24 19:53:30.402143] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:01.782 [2024-07-24 19:53:30.402152] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.402156] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x121d2c0) 00:15:01.782 [2024-07-24 19:53:30.402164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.782 [2024-07-24 19:53:30.402183] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125ef40, cid 4, qid 0 00:15:01.782 [2024-07-24 19:53:30.402470] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:01.782 [2024-07-24 19:53:30.402486] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:01.782 [2024-07-24 19:53:30.402491] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.402495] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121d2c0): datao=0, datal=4096, cccid=4 00:15:01.782 [2024-07-24 19:53:30.402500] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125ef40) on tqpair(0x121d2c0): expected_datao=0, payload_size=4096 00:15:01.782 [2024-07-24 19:53:30.402505] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.402513] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.402517] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.402526] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.782 [2024-07-24 19:53:30.402532] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.782 [2024-07-24 19:53:30.402536] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.402540] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125ef40) on tqpair=0x121d2c0 00:15:01.782 [2024-07-24 19:53:30.402552] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:15:01.782 [2024-07-24 19:53:30.402564] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:15:01.782 [2024-07-24 19:53:30.402575] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:15:01.782 [2024-07-24 19:53:30.402584] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.402588] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x121d2c0) 00:15:01.782 [2024-07-24 19:53:30.402596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.782 [2024-07-24 19:53:30.402617] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125ef40, cid 4, qid 0 00:15:01.782 [2024-07-24 19:53:30.406762] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:01.782 [2024-07-24 19:53:30.406783] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:01.782 [2024-07-24 19:53:30.406787] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.406791] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121d2c0): datao=0, datal=4096, cccid=4 00:15:01.782 [2024-07-24 19:53:30.406797] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125ef40) on tqpair(0x121d2c0): expected_datao=0, payload_size=4096 00:15:01.782 [2024-07-24 19:53:30.406802] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.406810] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.406814] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.406820] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.782 [2024-07-24 19:53:30.406826] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.782 
[2024-07-24 19:53:30.406830] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.406835] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125ef40) on tqpair=0x121d2c0 00:15:01.782 [2024-07-24 19:53:30.406855] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:01.782 [2024-07-24 19:53:30.406868] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:01.782 [2024-07-24 19:53:30.406879] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.406884] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x121d2c0) 00:15:01.782 [2024-07-24 19:53:30.406893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.782 [2024-07-24 19:53:30.406922] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125ef40, cid 4, qid 0 00:15:01.782 [2024-07-24 19:53:30.407005] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:01.782 [2024-07-24 19:53:30.407013] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:01.782 [2024-07-24 19:53:30.407017] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.407021] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121d2c0): datao=0, datal=4096, cccid=4 00:15:01.782 [2024-07-24 19:53:30.407026] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125ef40) on tqpair(0x121d2c0): expected_datao=0, payload_size=4096 00:15:01.782 [2024-07-24 19:53:30.407031] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.407038] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.407042] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.407050] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.782 [2024-07-24 19:53:30.407057] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.782 [2024-07-24 19:53:30.407060] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.407065] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125ef40) on tqpair=0x121d2c0 00:15:01.782 [2024-07-24 19:53:30.407075] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:01.782 [2024-07-24 19:53:30.407084] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:15:01.782 [2024-07-24 19:53:30.407095] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:15:01.782 [2024-07-24 19:53:30.407102] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:01.782 [2024-07-24 19:53:30.407108] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 
00:15:01.782 [2024-07-24 19:53:30.407114] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:15:01.782 [2024-07-24 19:53:30.407120] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:15:01.782 [2024-07-24 19:53:30.407125] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:15:01.782 [2024-07-24 19:53:30.407131] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:15:01.782 [2024-07-24 19:53:30.407151] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.407156] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x121d2c0) 00:15:01.782 [2024-07-24 19:53:30.407163] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.782 [2024-07-24 19:53:30.407171] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.407176] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.407180] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x121d2c0) 00:15:01.782 [2024-07-24 19:53:30.407186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.782 [2024-07-24 19:53:30.407212] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125ef40, cid 4, qid 0 00:15:01.782 [2024-07-24 19:53:30.407219] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125f0c0, cid 5, qid 0 00:15:01.782 [2024-07-24 19:53:30.407679] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.782 [2024-07-24 19:53:30.407696] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.782 [2024-07-24 19:53:30.407701] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.407705] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125ef40) on tqpair=0x121d2c0 00:15:01.782 [2024-07-24 19:53:30.407713] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.782 [2024-07-24 19:53:30.407719] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.782 [2024-07-24 19:53:30.407723] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.407727] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125f0c0) on tqpair=0x121d2c0 00:15:01.782 [2024-07-24 19:53:30.407750] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.407757] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x121d2c0) 00:15:01.782 [2024-07-24 19:53:30.407765] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.782 [2024-07-24 19:53:30.407786] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125f0c0, cid 5, qid 0 00:15:01.782 [2024-07-24 19:53:30.407838] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.782 [2024-07-24 19:53:30.407845] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.782 [2024-07-24 19:53:30.407849] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.407853] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125f0c0) on tqpair=0x121d2c0 00:15:01.782 [2024-07-24 19:53:30.407864] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.782 [2024-07-24 19:53:30.407869] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x121d2c0) 00:15:01.782 [2024-07-24 19:53:30.407876] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.782 [2024-07-24 19:53:30.407894] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125f0c0, cid 5, qid 0 00:15:01.782 [2024-07-24 19:53:30.408033] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.782 [2024-07-24 19:53:30.408041] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.782 [2024-07-24 19:53:30.408045] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.783 [2024-07-24 19:53:30.408049] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125f0c0) on tqpair=0x121d2c0 00:15:01.783 [2024-07-24 19:53:30.408060] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.783 [2024-07-24 19:53:30.408065] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x121d2c0) 00:15:01.783 [2024-07-24 19:53:30.408073] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.783 [2024-07-24 19:53:30.408091] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125f0c0, cid 5, qid 0 00:15:01.783 [2024-07-24 19:53:30.408389] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.783 [2024-07-24 19:53:30.408404] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.783 [2024-07-24 19:53:30.408409] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.783 [2024-07-24 19:53:30.408413] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125f0c0) on tqpair=0x121d2c0 00:15:01.783 [2024-07-24 19:53:30.408435] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.783 [2024-07-24 19:53:30.408440] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x121d2c0) 00:15:01.783 [2024-07-24 19:53:30.408448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.783 [2024-07-24 19:53:30.408456] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.783 [2024-07-24 19:53:30.408461] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x121d2c0) 00:15:01.783 [2024-07-24 19:53:30.408468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.783 [2024-07-24 19:53:30.408476] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.783 [2024-07-24 19:53:30.408480] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x121d2c0) 
00:15:01.783 [2024-07-24 19:53:30.408487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.783 [2024-07-24 19:53:30.408495] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.783 [2024-07-24 19:53:30.408499] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x121d2c0) 00:15:01.783 [2024-07-24 19:53:30.408506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.783 [2024-07-24 19:53:30.408527] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125f0c0, cid 5, qid 0 00:15:01.783 [2024-07-24 19:53:30.408535] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125ef40, cid 4, qid 0 00:15:01.783 [2024-07-24 19:53:30.408540] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125f240, cid 6, qid 0 00:15:01.783 [2024-07-24 19:53:30.408545] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125f3c0, cid 7, qid 0 00:15:01.783 [2024-07-24 19:53:30.409015] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:01.783 [2024-07-24 19:53:30.409031] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:01.783 [2024-07-24 19:53:30.409035] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:01.783 [2024-07-24 19:53:30.409039] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121d2c0): datao=0, datal=8192, cccid=5 00:15:01.783 [2024-07-24 19:53:30.409044] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125f0c0) on tqpair(0x121d2c0): expected_datao=0, payload_size=8192 00:15:01.783 [2024-07-24 19:53:30.409049] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.783 [2024-07-24 19:53:30.409069] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:01.783 [2024-07-24 19:53:30.409074] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:01.783 [2024-07-24 19:53:30.409080] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:01.783 [2024-07-24 19:53:30.409086] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:01.783 [2024-07-24 19:53:30.409090] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:01.783 [2024-07-24 19:53:30.409094] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121d2c0): datao=0, datal=512, cccid=4 00:15:01.783 [2024-07-24 19:53:30.409099] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125ef40) on tqpair(0x121d2c0): expected_datao=0, payload_size=512 00:15:01.783 [2024-07-24 19:53:30.409104] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.783 [2024-07-24 19:53:30.409110] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:01.783 [2024-07-24 19:53:30.409114] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:01.783 [2024-07-24 19:53:30.409120] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:01.783 [2024-07-24 19:53:30.409126] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:01.783 [2024-07-24 19:53:30.409130] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:01.783 [2024-07-24 19:53:30.409133] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
c2h_data info on tqpair(0x121d2c0): datao=0, datal=512, cccid=6 00:15:01.783 [2024-07-24 19:53:30.409138] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125f240) on tqpair(0x121d2c0): expected_datao=0, payload_size=512 00:15:01.783 [2024-07-24 19:53:30.409142] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.783 [2024-07-24 19:53:30.409149] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:01.783 [2024-07-24 19:53:30.409152] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:01.783 [2024-07-24 19:53:30.409158] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:01.783 [2024-07-24 19:53:30.409164] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:01.783 [2024-07-24 19:53:30.409168] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:01.783 [2024-07-24 19:53:30.409172] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x121d2c0): datao=0, datal=4096, cccid=7 00:15:01.783 [2024-07-24 19:53:30.409176] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x125f3c0) on tqpair(0x121d2c0): expected_datao=0, payload_size=4096 00:15:01.783 [2024-07-24 19:53:30.409181] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.783 [2024-07-24 19:53:30.409188] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:01.783 [2024-07-24 19:53:30.409191] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:01.783 [2024-07-24 19:53:30.409197] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.783 [2024-07-24 19:53:30.409203] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.783 [2024-07-24 19:53:30.409207] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.783 [2024-07-24 19:53:30.409211] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125f0c0) on tqpair=0x121d2c0 00:15:01.783 [2024-07-24 19:53:30.409232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.783 [2024-07-24 19:53:30.409240] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.783 ===================================================== 00:15:01.783 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:01.783 ===================================================== 00:15:01.783 Controller Capabilities/Features 00:15:01.783 ================================ 00:15:01.783 Vendor ID: 8086 00:15:01.783 Subsystem Vendor ID: 8086 00:15:01.783 Serial Number: SPDK00000000000001 00:15:01.783 Model Number: SPDK bdev Controller 00:15:01.783 Firmware Version: 24.09 00:15:01.783 Recommended Arb Burst: 6 00:15:01.783 IEEE OUI Identifier: e4 d2 5c 00:15:01.783 Multi-path I/O 00:15:01.783 May have multiple subsystem ports: Yes 00:15:01.783 May have multiple controllers: Yes 00:15:01.783 Associated with SR-IOV VF: No 00:15:01.783 Max Data Transfer Size: 131072 00:15:01.783 Max Number of Namespaces: 32 00:15:01.783 Max Number of I/O Queues: 127 00:15:01.783 NVMe Specification Version (VS): 1.3 00:15:01.783 NVMe Specification Version (Identify): 1.3 00:15:01.783 Maximum Queue Entries: 128 00:15:01.783 Contiguous Queues Required: Yes 00:15:01.783 Arbitration Mechanisms Supported 00:15:01.783 Weighted Round Robin: Not Supported 00:15:01.783 Vendor Specific: Not Supported 00:15:01.783 Reset Timeout: 15000 ms 00:15:01.783 Doorbell Stride: 4 bytes 00:15:01.783 NVM Subsystem Reset: Not Supported 00:15:01.783 
Command Sets Supported 00:15:01.783 NVM Command Set: Supported 00:15:01.783 Boot Partition: Not Supported 00:15:01.783 Memory Page Size Minimum: 4096 bytes 00:15:01.783 Memory Page Size Maximum: 4096 bytes 00:15:01.783 Persistent Memory Region: Not Supported 00:15:01.783 Optional Asynchronous Events Supported 00:15:01.783 Namespace Attribute Notices: Supported 00:15:01.783 Firmware Activation Notices: Not Supported 00:15:01.783 ANA Change Notices: Not Supported 00:15:01.783 PLE Aggregate Log Change Notices: Not Supported 00:15:01.783 LBA Status Info Alert Notices: Not Supported 00:15:01.783 EGE Aggregate Log Change Notices: Not Supported 00:15:01.783 Normal NVM Subsystem Shutdown event: Not Supported 00:15:01.783 Zone Descriptor Change Notices: Not Supported 00:15:01.783 Discovery Log Change Notices: Not Supported 00:15:01.783 Controller Attributes 00:15:01.783 128-bit Host Identifier: Supported 00:15:01.783 Non-Operational Permissive Mode: Not Supported 00:15:01.783 NVM Sets: Not Supported 00:15:01.783 Read Recovery Levels: Not Supported 00:15:01.783 Endurance Groups: Not Supported 00:15:01.783 Predictable Latency Mode: Not Supported 00:15:01.783 Traffic Based Keep ALive: Not Supported 00:15:01.783 Namespace Granularity: Not Supported 00:15:01.783 SQ Associations: Not Supported 00:15:01.783 UUID List: Not Supported 00:15:01.783 Multi-Domain Subsystem: Not Supported 00:15:01.783 Fixed Capacity Management: Not Supported 00:15:01.783 Variable Capacity Management: Not Supported 00:15:01.783 Delete Endurance Group: Not Supported 00:15:01.783 Delete NVM Set: Not Supported 00:15:01.783 Extended LBA Formats Supported: Not Supported 00:15:01.783 Flexible Data Placement Supported: Not Supported 00:15:01.783 00:15:01.783 Controller Memory Buffer Support 00:15:01.783 ================================ 00:15:01.783 Supported: No 00:15:01.783 00:15:01.783 Persistent Memory Region Support 00:15:01.783 ================================ 00:15:01.783 Supported: No 00:15:01.783 00:15:01.783 Admin Command Set Attributes 00:15:01.783 ============================ 00:15:01.784 Security Send/Receive: Not Supported 00:15:01.784 Format NVM: Not Supported 00:15:01.784 Firmware Activate/Download: Not Supported 00:15:01.784 Namespace Management: Not Supported 00:15:01.784 Device Self-Test: Not Supported 00:15:01.784 Directives: Not Supported 00:15:01.784 NVMe-MI: Not Supported 00:15:01.784 Virtualization Management: Not Supported 00:15:01.784 Doorbell Buffer Config: Not Supported 00:15:01.784 Get LBA Status Capability: Not Supported 00:15:01.784 Command & Feature Lockdown Capability: Not Supported 00:15:01.784 Abort Command Limit: 4 00:15:01.784 Async Event Request Limit: 4 00:15:01.784 Number of Firmware Slots: N/A 00:15:01.784 Firmware Slot 1 Read-Only: N/A 00:15:01.784 Firmware Activation Without Reset: N/A 00:15:01.784 Multiple Update Detection Support: N/A 00:15:01.784 Firmware Update Granularity: No Information Provided 00:15:01.784 Per-Namespace SMART Log: No 00:15:01.784 Asymmetric Namespace Access Log Page: Not Supported 00:15:01.784 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:01.784 Command Effects Log Page: Supported 00:15:01.784 Get Log Page Extended Data: Supported 00:15:01.784 Telemetry Log Pages: Not Supported 00:15:01.784 Persistent Event Log Pages: Not Supported 00:15:01.784 Supported Log Pages Log Page: May Support 00:15:01.784 Commands Supported & Effects Log Page: Not Supported 00:15:01.784 Feature Identifiers & Effects Log Page:May Support 00:15:01.784 NVMe-MI Commands & Effects Log Page: May 
Support 00:15:01.784 Data Area 4 for Telemetry Log: Not Supported 00:15:01.784 Error Log Page Entries Supported: 128 00:15:01.784 Keep Alive: Supported 00:15:01.784 Keep Alive Granularity: 10000 ms 00:15:01.784 00:15:01.784 NVM Command Set Attributes 00:15:01.784 ========================== 00:15:01.784 Submission Queue Entry Size 00:15:01.784 Max: 64 00:15:01.784 Min: 64 00:15:01.784 Completion Queue Entry Size 00:15:01.784 Max: 16 00:15:01.784 Min: 16 00:15:01.784 Number of Namespaces: 32 00:15:01.784 Compare Command: Supported 00:15:01.784 Write Uncorrectable Command: Not Supported 00:15:01.784 Dataset Management Command: Supported 00:15:01.784 Write Zeroes Command: Supported 00:15:01.784 Set Features Save Field: Not Supported 00:15:01.784 Reservations: Supported 00:15:01.784 Timestamp: Not Supported 00:15:01.784 Copy: Supported 00:15:01.784 Volatile Write Cache: Present 00:15:01.784 Atomic Write Unit (Normal): 1 00:15:01.784 Atomic Write Unit (PFail): 1 00:15:01.784 Atomic Compare & Write Unit: 1 00:15:01.784 Fused Compare & Write: Supported 00:15:01.784 Scatter-Gather List 00:15:01.784 SGL Command Set: Supported 00:15:01.784 SGL Keyed: Supported 00:15:01.784 SGL Bit Bucket Descriptor: Not Supported 00:15:01.784 SGL Metadata Pointer: Not Supported 00:15:01.784 Oversized SGL: Not Supported 00:15:01.784 SGL Metadata Address: Not Supported 00:15:01.784 SGL Offset: Supported 00:15:01.784 Transport SGL Data Block: Not Supported 00:15:01.784 Replay Protected Memory Block: Not Supported 00:15:01.784 00:15:01.784 Firmware Slot Information 00:15:01.784 ========================= 00:15:01.784 Active slot: 1 00:15:01.784 Slot 1 Firmware Revision: 24.09 00:15:01.784 00:15:01.784 00:15:01.784 Commands Supported and Effects 00:15:01.784 ============================== 00:15:01.784 Admin Commands 00:15:01.784 -------------- 00:15:01.784 Get Log Page (02h): Supported 00:15:01.784 Identify (06h): Supported 00:15:01.784 Abort (08h): Supported 00:15:01.784 Set Features (09h): Supported 00:15:01.784 Get Features (0Ah): Supported 00:15:01.784 Asynchronous Event Request (0Ch): Supported 00:15:01.784 Keep Alive (18h): Supported 00:15:01.784 I/O Commands 00:15:01.784 ------------ 00:15:01.784 Flush (00h): Supported LBA-Change 00:15:01.784 Write (01h): Supported LBA-Change 00:15:01.784 Read (02h): Supported 00:15:01.784 Compare (05h): Supported 00:15:01.784 Write Zeroes (08h): Supported LBA-Change 00:15:01.784 Dataset Management (09h): Supported LBA-Change 00:15:01.784 Copy (19h): Supported LBA-Change 00:15:01.784 00:15:01.784 Error Log 00:15:01.784 ========= 00:15:01.784 00:15:01.784 Arbitration 00:15:01.784 =========== 00:15:01.784 Arbitration Burst: 1 00:15:01.784 00:15:01.784 Power Management 00:15:01.784 ================ 00:15:01.784 Number of Power States: 1 00:15:01.784 Current Power State: Power State #0 00:15:01.784 Power State #0: 00:15:01.784 Max Power: 0.00 W 00:15:01.784 Non-Operational State: Operational 00:15:01.784 Entry Latency: Not Reported 00:15:01.784 Exit Latency: Not Reported 00:15:01.784 Relative Read Throughput: 0 00:15:01.784 Relative Read Latency: 0 00:15:01.784 Relative Write Throughput: 0 00:15:01.784 Relative Write Latency: 0 00:15:01.784 Idle Power: Not Reported 00:15:01.784 Active Power: Not Reported 00:15:01.784 Non-Operational Permissive Mode: Not Supported 00:15:01.784 00:15:01.784 Health Information 00:15:01.784 ================== 00:15:01.784 Critical Warnings: 00:15:01.784 Available Spare Space: OK 00:15:01.784 Temperature: OK 00:15:01.784 Device Reliability: OK 00:15:01.784 
Read Only: No 00:15:01.784 Volatile Memory Backup: OK 00:15:01.784 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:01.784 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:01.784 Available Spare: 0% 00:15:01.784 Available Spare Threshold: 0% 00:15:01.784 Life Percentage Used:[2024-07-24 19:53:30.409243] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.784 [2024-07-24 19:53:30.409248] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125ef40) on tqpair=0x121d2c0 00:15:01.784 [2024-07-24 19:53:30.409261] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.784 [2024-07-24 19:53:30.409267] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.784 [2024-07-24 19:53:30.409271] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.784 [2024-07-24 19:53:30.409275] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125f240) on tqpair=0x121d2c0 00:15:01.784 [2024-07-24 19:53:30.409283] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.784 [2024-07-24 19:53:30.409289] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.784 [2024-07-24 19:53:30.409292] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.784 [2024-07-24 19:53:30.409296] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125f3c0) on tqpair=0x121d2c0 00:15:01.784 [2024-07-24 19:53:30.409412] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.784 [2024-07-24 19:53:30.409420] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x121d2c0) 00:15:01.784 [2024-07-24 19:53:30.409428] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.784 [2024-07-24 19:53:30.409453] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125f3c0, cid 7, qid 0 00:15:01.784 [2024-07-24 19:53:30.410063] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.784 [2024-07-24 19:53:30.410080] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.784 [2024-07-24 19:53:30.410085] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.784 [2024-07-24 19:53:30.410089] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125f3c0) on tqpair=0x121d2c0 00:15:01.784 [2024-07-24 19:53:30.410134] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:15:01.784 [2024-07-24 19:53:30.410146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125e940) on tqpair=0x121d2c0 00:15:01.784 [2024-07-24 19:53:30.410154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.784 [2024-07-24 19:53:30.410160] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125eac0) on tqpair=0x121d2c0 00:15:01.785 [2024-07-24 19:53:30.410165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.785 [2024-07-24 19:53:30.410171] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125ec40) on tqpair=0x121d2c0 00:15:01.785 [2024-07-24 19:53:30.410175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.785 
[2024-07-24 19:53:30.410181] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125edc0) on tqpair=0x121d2c0 00:15:01.785 [2024-07-24 19:53:30.410186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.785 [2024-07-24 19:53:30.410196] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.785 [2024-07-24 19:53:30.410201] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.785 [2024-07-24 19:53:30.410205] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121d2c0) 00:15:01.785 [2024-07-24 19:53:30.410214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.785 [2024-07-24 19:53:30.410238] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125edc0, cid 3, qid 0 00:15:01.785 [2024-07-24 19:53:30.410357] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.785 [2024-07-24 19:53:30.410364] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.785 [2024-07-24 19:53:30.410368] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.785 [2024-07-24 19:53:30.410372] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125edc0) on tqpair=0x121d2c0 00:15:01.785 [2024-07-24 19:53:30.410381] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.785 [2024-07-24 19:53:30.410385] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.785 [2024-07-24 19:53:30.410389] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121d2c0) 00:15:01.785 [2024-07-24 19:53:30.410397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.785 [2024-07-24 19:53:30.410419] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125edc0, cid 3, qid 0 00:15:01.785 [2024-07-24 19:53:30.414763] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.785 [2024-07-24 19:53:30.414787] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.785 [2024-07-24 19:53:30.414793] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.785 [2024-07-24 19:53:30.414798] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125edc0) on tqpair=0x121d2c0 00:15:01.785 [2024-07-24 19:53:30.414804] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:15:01.785 [2024-07-24 19:53:30.414810] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:15:01.785 [2024-07-24 19:53:30.414823] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:01.785 [2024-07-24 19:53:30.414829] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:01.785 [2024-07-24 19:53:30.414833] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x121d2c0) 00:15:01.785 [2024-07-24 19:53:30.414843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:01.785 [2024-07-24 19:53:30.414869] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x125edc0, cid 3, qid 0 00:15:01.785 [2024-07-24 19:53:30.414923] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:01.785 [2024-07-24 19:53:30.414930] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:01.785 [2024-07-24 19:53:30.414934] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:01.785 [2024-07-24 19:53:30.414938] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x125edc0) on tqpair=0x121d2c0 00:15:01.785 [2024-07-24 19:53:30.414948] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:15:01.785 0% 00:15:01.785 Data Units Read: 0 00:15:01.785 Data Units Written: 0 00:15:01.785 Host Read Commands: 0 00:15:01.785 Host Write Commands: 0 00:15:01.785 Controller Busy Time: 0 minutes 00:15:01.785 Power Cycles: 0 00:15:01.785 Power On Hours: 0 hours 00:15:01.785 Unsafe Shutdowns: 0 00:15:01.785 Unrecoverable Media Errors: 0 00:15:01.785 Lifetime Error Log Entries: 0 00:15:01.785 Warning Temperature Time: 0 minutes 00:15:01.785 Critical Temperature Time: 0 minutes 00:15:01.785 00:15:01.785 Number of Queues 00:15:01.785 ================ 00:15:01.785 Number of I/O Submission Queues: 127 00:15:01.785 Number of I/O Completion Queues: 127 00:15:01.785 00:15:01.785 Active Namespaces 00:15:01.785 ================= 00:15:01.785 Namespace ID:1 00:15:01.785 Error Recovery Timeout: Unlimited 00:15:01.785 Command Set Identifier: NVM (00h) 00:15:01.785 Deallocate: Supported 00:15:01.785 Deallocated/Unwritten Error: Not Supported 00:15:01.785 Deallocated Read Value: Unknown 00:15:01.785 Deallocate in Write Zeroes: Not Supported 00:15:01.785 Deallocated Guard Field: 0xFFFF 00:15:01.785 Flush: Supported 00:15:01.785 Reservation: Supported 00:15:01.785 Namespace Sharing Capabilities: Multiple Controllers 00:15:01.785 Size (in LBAs): 131072 (0GiB) 00:15:01.785 Capacity (in LBAs): 131072 (0GiB) 00:15:01.785 Utilization (in LBAs): 131072 (0GiB) 00:15:01.785 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:01.785 EUI64: ABCDEF0123456789 00:15:01.785 UUID: 682b3e74-6d2b-44cc-9bb7-2582e803d878 00:15:01.785 Thin Provisioning: Not Supported 00:15:01.785 Per-NS Atomic Units: Yes 00:15:01.785 Atomic Boundary Size (Normal): 0 00:15:01.785 Atomic Boundary Size (PFail): 0 00:15:01.785 Atomic Boundary Offset: 0 00:15:01.785 Maximum Single Source Range Length: 65535 00:15:01.785 Maximum Copy Length: 65535 00:15:01.785 Maximum Source Range Count: 1 00:15:01.785 NGUID/EUI64 Never Reused: No 00:15:01.785 Namespace Write Protected: No 00:15:01.785 Number of LBA Formats: 1 00:15:01.785 Current LBA Format: LBA Format #00 00:15:01.785 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:01.785 00:15:01.785 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:15:02.045 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:02.045 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.045 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:02.045 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.045 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:02.045 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:15:02.045 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:02.045 19:53:30 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:15:02.045 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:02.045 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:15:02.045 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:02.045 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:02.045 rmmod nvme_tcp 00:15:02.045 rmmod nvme_fabrics 00:15:02.045 rmmod nvme_keyring 00:15:02.045 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:02.045 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:15:02.045 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:15:02.045 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 73975 ']' 00:15:02.045 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 73975 00:15:02.045 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 73975 ']' 00:15:02.045 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 73975 00:15:02.045 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:15:02.045 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:02.045 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73975 00:15:02.045 killing process with pid 73975 00:15:02.045 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:02.045 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:02.045 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73975' 00:15:02.045 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 73975 00:15:02.045 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 73975 00:15:02.304 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:02.304 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:02.304 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:02.304 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:02.304 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:02.304 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.304 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:02.304 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.304 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:02.304 ************************************ 00:15:02.304 END TEST nvmf_identify 00:15:02.304 ************************************ 00:15:02.304 00:15:02.304 real 0m2.401s 00:15:02.304 user 0m6.489s 00:15:02.304 sys 0m0.635s 00:15:02.304 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:15:02.304 19:53:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:02.304 19:53:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:02.304 19:53:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:02.304 19:53:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:02.304 19:53:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:02.304 ************************************ 00:15:02.304 START TEST nvmf_perf 00:15:02.304 ************************************ 00:15:02.304 19:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:02.564 * Looking for test storage... 00:15:02.564 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:02.564 19:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:02.564 19:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:15:02.564 19:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.564 19:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.564 19:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.564 19:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.564 19:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.564 19:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.564 19:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.564 19:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.564 19:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.564 19:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.564 19:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:15:02.564 19:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:15:02.564 19:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.564 19:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.564 19:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:02.564 19:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:02.564 19:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:02.564 19:53:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:02.564 Cannot find device "nvmf_tgt_br" 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # true 
00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:02.564 Cannot find device "nvmf_tgt_br2" 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # true 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:02.564 Cannot find device "nvmf_tgt_br" 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # true 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:02.564 Cannot find device "nvmf_tgt_br2" 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # true 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:02.564 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:02.564 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:02.564 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:02.565 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:02.565 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:02.565 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:02.823 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:02.823 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:02.823 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:02.823 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:02.823 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:02.823 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:02.823 
19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:02.823 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:02.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:02.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:15:02.824 00:15:02.824 --- 10.0.0.2 ping statistics --- 00:15:02.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.824 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:02.824 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:02.824 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:15:02.824 00:15:02.824 --- 10.0.0.3 ping statistics --- 00:15:02.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.824 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:02.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:02.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:15:02.824 00:15:02.824 --- 10.0.0.1 ping statistics --- 00:15:02.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.824 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:02.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=74182 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 74182 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 74182 ']' 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:02.824 19:53:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:02.824 [2024-07-24 19:53:31.413111] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:15:02.824 [2024-07-24 19:53:31.413342] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.083 [2024-07-24 19:53:31.548232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:03.083 [2024-07-24 19:53:31.699789] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:03.083 [2024-07-24 19:53:31.700068] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:03.083 [2024-07-24 19:53:31.700279] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:03.083 [2024-07-24 19:53:31.700484] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:03.083 [2024-07-24 19:53:31.700595] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:03.083 [2024-07-24 19:53:31.700840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.083 [2024-07-24 19:53:31.701048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:03.083 [2024-07-24 19:53:31.701051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.083 [2024-07-24 19:53:31.700958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:03.342 [2024-07-24 19:53:31.758141] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:03.907 19:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:03.907 19:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:15:03.907 19:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:03.907 19:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:03.907 19:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:03.907 19:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:03.907 19:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:03.907 19:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:04.165 19:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:04.165 19:53:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:04.731 19:53:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:15:04.731 19:53:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:04.731 19:53:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:04.731 19:53:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:15:04.731 19:53:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:04.731 19:53:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:04.731 19:53:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:04.989 [2024-07-24 19:53:33.583040] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:04.989 19:53:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:05.248 19:53:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:05.248 19:53:33 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:05.506 19:53:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:05.506 19:53:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:15:05.765 19:53:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:06.024 [2024-07-24 19:53:34.636464] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:06.024 19:53:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:06.283 19:53:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:15:06.283 19:53:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:06.283 19:53:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:06.283 19:53:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:07.661 Initializing NVMe Controllers 00:15:07.661 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:07.661 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:07.661 Initialization complete. Launching workers. 00:15:07.661 ======================================================== 00:15:07.661 Latency(us) 00:15:07.661 Device Information : IOPS MiB/s Average min max 00:15:07.661 PCIE (0000:00:10.0) NSID 1 from core 0: 23914.46 93.42 1338.24 363.59 8912.15 00:15:07.661 ======================================================== 00:15:07.661 Total : 23914.46 93.42 1338.24 363.59 8912.15 00:15:07.661 00:15:07.661 19:53:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:08.599 Initializing NVMe Controllers 00:15:08.599 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:08.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:08.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:08.599 Initialization complete. Launching workers. 
00:15:08.599 ======================================================== 00:15:08.599 Latency(us) 00:15:08.599 Device Information : IOPS MiB/s Average min max 00:15:08.599 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3772.85 14.74 264.74 103.28 6194.94 00:15:08.599 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.99 0.48 8128.35 5404.75 12044.03 00:15:08.599 ======================================================== 00:15:08.599 Total : 3896.84 15.22 514.96 103.28 12044.03 00:15:08.599 00:15:08.857 19:53:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:10.234 Initializing NVMe Controllers 00:15:10.234 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:10.234 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:10.234 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:10.234 Initialization complete. Launching workers. 00:15:10.234 ======================================================== 00:15:10.234 Latency(us) 00:15:10.234 Device Information : IOPS MiB/s Average min max 00:15:10.234 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8221.01 32.11 3892.71 569.32 9705.70 00:15:10.234 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3966.46 15.49 8080.28 4829.44 15847.03 00:15:10.234 ======================================================== 00:15:10.234 Total : 12187.48 47.61 5255.57 569.32 15847.03 00:15:10.234 00:15:10.234 19:53:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:10.234 19:53:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:12.784 Initializing NVMe Controllers 00:15:12.784 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:12.784 Controller IO queue size 128, less than required. 00:15:12.784 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:12.784 Controller IO queue size 128, less than required. 00:15:12.784 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:12.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:12.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:12.784 Initialization complete. Launching workers. 
00:15:12.784 ======================================================== 00:15:12.784 Latency(us) 00:15:12.784 Device Information : IOPS MiB/s Average min max 00:15:12.784 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1628.52 407.13 79964.40 48797.31 112167.14 00:15:12.784 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 660.09 165.02 197896.80 78391.65 308450.73 00:15:12.784 ======================================================== 00:15:12.784 Total : 2288.61 572.15 113978.91 48797.31 308450.73 00:15:12.784 00:15:12.784 19:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:15:12.784 Initializing NVMe Controllers 00:15:12.784 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:12.784 Controller IO queue size 128, less than required. 00:15:12.784 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:12.784 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:12.784 Controller IO queue size 128, less than required. 00:15:12.784 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:12.784 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:12.784 WARNING: Some requested NVMe devices were skipped 00:15:12.784 No valid NVMe controllers or AIO or URING devices found 00:15:12.784 19:53:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:15:15.383 Initializing NVMe Controllers 00:15:15.383 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:15.383 Controller IO queue size 128, less than required. 00:15:15.383 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:15.383 Controller IO queue size 128, less than required. 00:15:15.383 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:15.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:15.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:15.383 Initialization complete. Launching workers. 
00:15:15.383 00:15:15.383 ==================== 00:15:15.383 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:15.383 TCP transport: 00:15:15.383 polls: 9284 00:15:15.383 idle_polls: 5820 00:15:15.383 sock_completions: 3464 00:15:15.383 nvme_completions: 6235 00:15:15.383 submitted_requests: 9320 00:15:15.383 queued_requests: 1 00:15:15.383 00:15:15.383 ==================== 00:15:15.383 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:15.383 TCP transport: 00:15:15.383 polls: 11958 00:15:15.383 idle_polls: 8104 00:15:15.383 sock_completions: 3854 00:15:15.383 nvme_completions: 6765 00:15:15.383 submitted_requests: 10192 00:15:15.383 queued_requests: 1 00:15:15.383 ======================================================== 00:15:15.383 Latency(us) 00:15:15.383 Device Information : IOPS MiB/s Average min max 00:15:15.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1555.95 388.99 84012.26 42396.41 147305.88 00:15:15.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1688.23 422.06 75984.67 35179.76 134485.22 00:15:15.383 ======================================================== 00:15:15.383 Total : 3244.17 811.04 79834.80 35179.76 147305.88 00:15:15.383 00:15:15.383 19:53:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:15.383 19:53:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:15.642 19:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:15.642 19:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:15.642 19:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:15.642 19:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:15.642 19:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:15:15.642 19:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:15.642 19:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:15:15.642 19:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:15.642 19:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:15.642 rmmod nvme_tcp 00:15:15.642 rmmod nvme_fabrics 00:15:15.642 rmmod nvme_keyring 00:15:15.643 19:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:15.643 19:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:15:15.643 19:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:15:15.643 19:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 74182 ']' 00:15:15.643 19:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 74182 00:15:15.643 19:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 74182 ']' 00:15:15.643 19:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 74182 00:15:15.643 19:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:15:15.643 19:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:15.643 19:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74182 00:15:15.643 killing process with pid 74182 00:15:15.643 19:53:44 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:15.643 19:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:15.643 19:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74182' 00:15:15.643 19:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 74182 00:15:15.643 19:53:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 74182 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:16.579 ************************************ 00:15:16.579 END TEST nvmf_perf 00:15:16.579 ************************************ 00:15:16.579 00:15:16.579 real 0m14.181s 00:15:16.579 user 0m52.024s 00:15:16.579 sys 0m4.018s 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:16.579 ************************************ 00:15:16.579 START TEST nvmf_fio_host 00:15:16.579 ************************************ 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:16.579 * Looking for test storage... 
00:15:16.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:16.579 19:53:45 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.579 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 
-- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:16.580 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:16.839 Cannot find device "nvmf_tgt_br" 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:16.839 Cannot find device "nvmf_tgt_br2" 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:16.839 
Cannot find device "nvmf_tgt_br" 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:16.839 Cannot find device "nvmf_tgt_br2" 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:16.839 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:16.839 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:16.839 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:16.839 19:53:45 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:17.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:15:17.099 00:15:17.099 --- 10.0.0.2 ping statistics --- 00:15:17.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.099 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:17.099 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:17.099 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:15:17.099 00:15:17.099 --- 10.0.0.3 ping statistics --- 00:15:17.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.099 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:17.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:15:17.099 00:15:17.099 --- 10.0.0.1 ping statistics --- 00:15:17.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.099 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:17.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
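The "Waiting for process to start up..." message above comes from the harness blocking until the target's RPC socket answers; the launch traced just below is nvmf_tgt run inside the target namespace with -i 0 -e 0xFFFF -m 0xF. A minimal standalone stand-in for that launch-and-wait step is sketched here; the spdk_get_version readiness probe and the retry count are illustrative assumptions, not what the harness's waitforlisten helper literally does:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for _ in $(seq 1 100); do                                       # poll until the app answers on /var/tmp/spdk.sock
    if "$rpc" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
        break
    fi
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done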
00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74591 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74591 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 74591 ']' 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:17.099 19:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:17.099 [2024-07-24 19:53:45.642167] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:15:17.099 [2024-07-24 19:53:45.642261] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.358 [2024-07-24 19:53:45.783394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:17.358 [2024-07-24 19:53:45.902931] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.358 [2024-07-24 19:53:45.903334] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.358 [2024-07-24 19:53:45.903369] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.358 [2024-07-24 19:53:45.903378] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.358 [2024-07-24 19:53:45.903385] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
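Once the target is up, configuring it for the fio run is the short RPC sequence that follows in the trace (the perf test earlier in this log ran an equivalent sequence): create the TCP transport, back a namespace with a malloc bdev, and expose it through a subsystem listening on 10.0.0.2:4420. Consolidated from the traced fio.sh commands:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t tcp -o -u 8192                  # transport flags exactly as carried in NVMF_TRANSPORT_OPTS
"$rpc" bdev_malloc_create 64 512 -b Malloc1                     # 64 MiB RAM-backed bdev, 512-byte blocks
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, fixed serial
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1                    # attach the bdev as a namespace
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Initiator-side tools then reach the namespace with 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' (spdk_nvme_perf) or 'trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' (the fio SPDK plugin), as the later commands in this log show.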
00:15:17.358 [2024-07-24 19:53:45.903535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.358 [2024-07-24 19:53:45.903807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.358 [2024-07-24 19:53:45.904221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:17.358 [2024-07-24 19:53:45.904268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.358 [2024-07-24 19:53:45.958915] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:18.294 19:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:18.294 19:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:15:18.294 19:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:18.294 [2024-07-24 19:53:46.873591] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:18.294 19:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:18.294 19:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:18.294 19:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:18.294 19:53:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:18.553 Malloc1 00:15:18.553 19:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:18.812 19:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:19.379 19:53:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:19.379 [2024-07-24 19:53:47.996334] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.379 19:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:19.638 19:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:19.638 19:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:19.638 19:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:19.638 19:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:19.638 19:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:19.638 19:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:19.638 19:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:19.638 19:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:19.638 19:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:19.638 19:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:19.638 19:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:19.638 19:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:19.638 19:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:19.638 19:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:19.638 19:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:19.638 19:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:19.638 19:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:19.638 19:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:19.638 19:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:19.638 19:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:19.638 19:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:19.638 19:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:19.638 19:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:19.897 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:19.897 fio-3.35 00:15:19.897 Starting 1 thread 00:15:22.455 00:15:22.455 test: (groupid=0, jobs=1): err= 0: pid=74674: Wed Jul 24 19:53:50 2024 00:15:22.455 read: IOPS=8866, BW=34.6MiB/s (36.3MB/s)(69.5MiB/2007msec) 00:15:22.455 slat (usec): min=2, max=334, avg= 2.49, stdev= 3.26 00:15:22.455 clat (usec): min=2614, max=13778, avg=7514.19, stdev=523.24 00:15:22.455 lat (usec): min=2658, max=13780, avg=7516.69, stdev=522.97 00:15:22.455 clat percentiles (usec): 00:15:22.455 | 1.00th=[ 6456], 5.00th=[ 6783], 10.00th=[ 6915], 20.00th=[ 7111], 00:15:22.455 | 30.00th=[ 7242], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7635], 00:15:22.455 | 70.00th=[ 7701], 80.00th=[ 7898], 90.00th=[ 8094], 95.00th=[ 8291], 00:15:22.455 | 99.00th=[ 8848], 99.50th=[ 9241], 99.90th=[11600], 99.95th=[12780], 00:15:22.455 | 99.99th=[13698] 00:15:22.455 bw ( KiB/s): min=34816, max=35720, per=99.96%, avg=35452.00, stdev=426.86, samples=4 00:15:22.455 iops : min= 8704, max= 8930, avg=8863.00, stdev=106.71, samples=4 00:15:22.455 write: IOPS=8878, BW=34.7MiB/s (36.4MB/s)(69.6MiB/2007msec); 0 zone resets 00:15:22.455 slat (usec): min=2, max=268, avg= 2.61, stdev= 2.21 00:15:22.455 clat (usec): min=2447, max=12927, avg=6849.43, stdev=474.49 00:15:22.455 lat (usec): min=2461, max=12929, avg=6852.04, stdev=474.40 00:15:22.455 clat percentiles 
(usec): 00:15:22.455 | 1.00th=[ 5866], 5.00th=[ 6194], 10.00th=[ 6325], 20.00th=[ 6521], 00:15:22.455 | 30.00th=[ 6652], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6915], 00:15:22.455 | 70.00th=[ 7046], 80.00th=[ 7177], 90.00th=[ 7373], 95.00th=[ 7504], 00:15:22.455 | 99.00th=[ 7963], 99.50th=[ 8356], 99.90th=[11207], 99.95th=[11994], 00:15:22.455 | 99.99th=[12911] 00:15:22.455 bw ( KiB/s): min=34944, max=35776, per=100.00%, avg=35522.00, stdev=388.44, samples=4 00:15:22.455 iops : min= 8736, max= 8944, avg=8880.50, stdev=97.11, samples=4 00:15:22.455 lat (msec) : 4=0.08%, 10=99.75%, 20=0.17% 00:15:22.455 cpu : usr=71.04%, sys=21.78%, ctx=7, majf=0, minf=7 00:15:22.455 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:22.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:22.455 issued rwts: total=17795,17819,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.455 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:22.455 00:15:22.455 Run status group 0 (all jobs): 00:15:22.455 READ: bw=34.6MiB/s (36.3MB/s), 34.6MiB/s-34.6MiB/s (36.3MB/s-36.3MB/s), io=69.5MiB (72.9MB), run=2007-2007msec 00:15:22.455 WRITE: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=69.6MiB (73.0MB), run=2007-2007msec 00:15:22.455 19:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:22.455 19:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:22.455 19:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:22.455 19:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:22.455 19:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:22.455 19:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:22.455 19:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:22.455 19:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:22.455 19:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:22.455 19:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:22.455 19:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:22.455 19:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:22.455 19:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:22.455 19:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:22.455 19:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:22.455 19:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:22.455 19:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:22.455 19:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:22.455 19:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:22.455 19:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:22.455 19:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:22.455 19:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:22.455 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:22.455 fio-3.35 00:15:22.455 Starting 1 thread 00:15:24.988 00:15:24.988 test: (groupid=0, jobs=1): err= 0: pid=74718: Wed Jul 24 19:53:53 2024 00:15:24.988 read: IOPS=7806, BW=122MiB/s (128MB/s)(245MiB/2009msec) 00:15:24.988 slat (usec): min=3, max=118, avg= 3.87, stdev= 1.76 00:15:24.988 clat (usec): min=1722, max=18362, avg=9113.91, stdev=2693.36 00:15:24.988 lat (usec): min=1725, max=18366, avg=9117.78, stdev=2693.40 00:15:24.988 clat percentiles (usec): 00:15:24.988 | 1.00th=[ 4424], 5.00th=[ 5145], 10.00th=[ 5735], 20.00th=[ 6652], 00:15:24.988 | 30.00th=[ 7504], 40.00th=[ 8160], 50.00th=[ 8848], 60.00th=[ 9634], 00:15:24.988 | 70.00th=[10552], 80.00th=[11338], 90.00th=[12780], 95.00th=[14091], 00:15:24.988 | 99.00th=[16057], 99.50th=[16909], 99.90th=[18220], 99.95th=[18220], 00:15:24.988 | 99.99th=[18482] 00:15:24.988 bw ( KiB/s): min=58208, max=67840, per=51.61%, avg=64464.00, stdev=4269.85, samples=4 00:15:24.988 iops : min= 3638, max= 4240, avg=4029.00, stdev=266.87, samples=4 00:15:24.988 write: IOPS=4606, BW=72.0MiB/s (75.5MB/s)(132MiB/1837msec); 0 zone resets 00:15:24.988 slat (usec): min=36, max=359, avg=39.64, stdev= 7.88 00:15:24.988 clat (usec): min=4338, max=23621, avg=12696.31, stdev=2445.88 00:15:24.988 lat (usec): min=4374, max=23658, avg=12735.94, stdev=2446.41 00:15:24.988 clat percentiles (usec): 00:15:24.988 | 1.00th=[ 7963], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[10683], 00:15:24.988 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12387], 60.00th=[13042], 00:15:24.988 | 70.00th=[13829], 80.00th=[14746], 90.00th=[16188], 95.00th=[17171], 00:15:24.988 | 99.00th=[18744], 99.50th=[19006], 99.90th=[20055], 99.95th=[22938], 00:15:24.988 | 99.99th=[23725] 00:15:24.988 bw ( KiB/s): min=61248, max=70528, per=91.11%, avg=67152.00, stdev=4070.13, samples=4 00:15:24.988 iops : min= 3828, max= 4408, avg=4197.00, stdev=254.38, samples=4 00:15:24.988 lat (msec) : 2=0.02%, 4=0.17%, 10=45.24%, 20=54.53%, 50=0.04% 00:15:24.988 cpu : usr=81.42%, sys=14.39%, ctx=23, majf=0, minf=12 00:15:24.988 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:15:24.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:24.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:24.988 issued rwts: total=15683,8462,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:24.988 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:24.988 00:15:24.988 Run status group 0 (all jobs): 00:15:24.988 READ: bw=122MiB/s (128MB/s), 
122MiB/s-122MiB/s (128MB/s-128MB/s), io=245MiB (257MB), run=2009-2009msec 00:15:24.988 WRITE: bw=72.0MiB/s (75.5MB/s), 72.0MiB/s-72.0MiB/s (75.5MB/s-75.5MB/s), io=132MiB (139MB), run=1837-1837msec 00:15:24.988 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:24.988 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:24.988 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:24.988 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:24.988 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:24.988 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:24.988 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:15:24.988 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:24.988 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:15:24.988 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:24.988 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:24.988 rmmod nvme_tcp 00:15:24.988 rmmod nvme_fabrics 00:15:24.988 rmmod nvme_keyring 00:15:24.988 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:24.988 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:15:24.988 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:15:24.988 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 74591 ']' 00:15:24.988 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 74591 00:15:24.988 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 74591 ']' 00:15:24.988 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 74591 00:15:24.988 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:15:24.988 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:24.988 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74591 00:15:24.988 killing process with pid 74591 00:15:24.988 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:24.988 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:24.988 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74591' 00:15:24.988 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 74591 00:15:24.988 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 74591 00:15:25.247 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:25.247 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:25.247 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:25.247 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:25.247 19:53:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:25.247 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.247 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:25.247 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.247 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:25.506 ************************************ 00:15:25.506 END TEST nvmf_fio_host 00:15:25.506 ************************************ 00:15:25.506 00:15:25.506 real 0m8.784s 00:15:25.506 user 0m36.036s 00:15:25.506 sys 0m2.349s 00:15:25.506 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:25.506 19:53:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.506 19:53:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:25.506 19:53:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:25.506 19:53:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:25.506 19:53:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:25.506 ************************************ 00:15:25.506 START TEST nvmf_failover 00:15:25.506 ************************************ 00:15:25.506 19:53:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:25.506 * Looking for test storage... 00:15:25.506 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 
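The nvmf_veth_init call above builds the virtual test network that the rest of this run talks over, and its xtrace follows. Condensed into a plain script, using the same namespace, interface, and address names that appear in the trace (and omitting the second target interface, the stale-device cleanup, and error handling), the topology it creates is roughly:

# Namespace for the target plus two veth pairs: one toward the initiator,
# one toward the target namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br

# Target end moves into the namespace; addresses on the 10.0.0.0/24 test net.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# Bring the links up and bridge the host-side peers together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Let NVMe/TCP traffic in on 4420, allow forwarding across the bridge,
# and verify the target address answers.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2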
00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.506 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:25.507 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:25.507 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:25.507 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:25.507 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:25.507 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.507 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:25.507 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:25.507 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:25.507 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:25.507 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:25.507 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:25.507 Cannot find device "nvmf_tgt_br" 00:15:25.507 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # true 00:15:25.507 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:25.507 Cannot find device "nvmf_tgt_br2" 00:15:25.507 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # true 00:15:25.507 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:25.507 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:25.507 Cannot find device "nvmf_tgt_br" 00:15:25.507 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # true 00:15:25.507 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:25.507 Cannot find device "nvmf_tgt_br2" 00:15:25.507 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # true 00:15:25.507 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:25.765 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:25.765 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:25.765 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.765 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:25.765 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:25.766 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- 
# ip netns add nvmf_tgt_ns_spdk 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:25.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:25.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:15:25.766 00:15:25.766 --- 10.0.0.2 ping statistics --- 00:15:25.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.766 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:25.766 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:25.766 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:15:25.766 00:15:25.766 --- 10.0.0.3 ping statistics --- 00:15:25.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.766 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:25.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:25.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:15:25.766 00:15:25.766 --- 10.0.0.1 ping statistics --- 00:15:25.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.766 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:25.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=74936 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 74936 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 74936 ']' 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:25.766 19:53:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:26.025 [2024-07-24 19:53:54.476150] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
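With 10.0.0.2 (and 10.0.0.3) answering from inside the namespace, the trace above prepends the ip netns exec wrapper to NVMF_APP, loads nvme-tcp on the initiator side, and starts the target through nvmfappstart -m 0xE. A minimal sketch of that launch-and-wait step, assuming the default /var/tmp/spdk.sock RPC socket and using rpc_get_methods as a simplified stand-in for the retry loop that waitforlisten actually runs:

# Start nvmf_tgt inside the target namespace, as traced above.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Block until the app's RPC socket answers a basic request.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done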
00:15:26.025 [2024-07-24 19:53:54.476543] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.025 [2024-07-24 19:53:54.619789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:26.285 [2024-07-24 19:53:54.736902] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.285 [2024-07-24 19:53:54.737221] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.285 [2024-07-24 19:53:54.737240] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:26.285 [2024-07-24 19:53:54.737249] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:26.285 [2024-07-24 19:53:54.737256] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:26.285 [2024-07-24 19:53:54.737603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:26.285 [2024-07-24 19:53:54.737752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.285 [2024-07-24 19:53:54.737755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:26.285 [2024-07-24 19:53:54.792033] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:26.852 19:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:26.852 19:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:15:26.852 19:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:26.852 19:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:26.852 19:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:26.852 19:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:26.852 19:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:27.110 [2024-07-24 19:53:55.722979] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:27.110 19:53:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:27.368 Malloc0 00:15:27.625 19:53:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:27.625 19:53:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:27.883 19:53:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:28.141 [2024-07-24 19:53:56.690568] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:28.141 19:53:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:28.400 [2024-07-24 19:53:56.922782] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:28.400 19:53:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:28.659 [2024-07-24 19:53:57.154947] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:28.659 19:53:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:28.659 19:53:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=74988 00:15:28.659 19:53:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:28.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:28.659 19:53:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 74988 /var/tmp/bdevperf.sock 00:15:28.659 19:53:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 74988 ']' 00:15:28.659 19:53:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:28.659 19:53:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:28.659 19:53:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
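Strung together, the RPCs traced above give the failover test its fixture: a TCP transport, a 64 MiB / 512-byte-block malloc bdev exposed as the namespace of nqn.2016-06.io.spdk:cnode1, three listeners on 10.0.0.2 that the test can add and remove at will, and a bdevperf process parked in -z (wait-for-RPC) mode on its own socket. A condensed recap of that setup, with paths and arguments taken from the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side: transport, backing bdev, subsystem, namespace, three listeners.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done

# Initiator side: bdevperf idles on its own RPC socket until perform_tests is called.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
bdevperf_pid=$!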
00:15:28.659 19:53:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:28.659 19:53:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:29.595 19:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:29.595 19:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:15:29.595 19:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:29.853 NVMe0n1 00:15:29.853 19:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:30.420 00:15:30.420 19:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75016 00:15:30.420 19:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:30.420 19:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:31.357 19:53:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:31.616 [2024-07-24 19:54:00.107429] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1a70 is same with the state(5) to be set 00:15:31.616 [2024-07-24 19:54:00.107488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1a70 is same with the state(5) to be set 00:15:31.616 [2024-07-24 19:54:00.107500] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1a70 is same with the state(5) to be set 00:15:31.616 [2024-07-24 19:54:00.107510] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1a70 is same with the state(5) to be set 00:15:31.616 [2024-07-24 19:54:00.107519] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1a70 is same with the state(5) to be set 00:15:31.616 [2024-07-24 19:54:00.107527] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1a70 is same with the state(5) to be set 00:15:31.616 [2024-07-24 19:54:00.107536] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1a70 is same with the state(5) to be set 00:15:31.616 [2024-07-24 19:54:00.107544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1a70 is same with the state(5) to be set 00:15:31.616 [2024-07-24 19:54:00.107552] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1a70 is same with the state(5) to be set 00:15:31.616 [2024-07-24 19:54:00.107560] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1a70 is same with the state(5) to be set 00:15:31.616 [2024-07-24 19:54:00.107569] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1a70 is same with the state(5) to be set 00:15:31.616 [2024-07-24 19:54:00.107577] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1a70 is same with the state(5) to be set 00:15:31.616 [2024-07-24 
19:54:00.107585] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1a70 is same with the state(5) to be set 00:15:31.616 (the same recv-state message for tqpair=0x16b1a70 repeats for every check between 19:54:00.107594 and 19:54:00.108338) 00:15:31.617 [2024-07-24 19:54:00.108347] tcp.c:1653:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x16b1a70 is same with the state(5) to be set 00:15:31.617 [2024-07-24 19:54:00.108898] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1a70 is same with the state(5) to be set 00:15:31.617 [2024-07-24 19:54:00.108908] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1a70 is same with the state(5) to be set 00:15:31.617 [2024-07-24 19:54:00.108917] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1a70 is same with the state(5) to be set 00:15:31.617 [2024-07-24 19:54:00.108926] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1a70 is same with the state(5) to be set 00:15:31.617 [2024-07-24 19:54:00.108935] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1a70 is same with the state(5) to be set 00:15:31.617 [2024-07-24 19:54:00.108943] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1a70 is same with the state(5) to be set 00:15:31.617 [2024-07-24 19:54:00.108951] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1a70 is same with the state(5) to be set 00:15:31.617 [2024-07-24 19:54:00.108960] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1a70 is same with the state(5) to be set 00:15:31.617 [2024-07-24 19:54:00.108969] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1a70 is same with the state(5) to be set 00:15:31.617 [2024-07-24 19:54:00.108976] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1a70 is same with the state(5) to be set 00:15:31.617 19:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:34.899 19:54:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:34.899 00:15:34.899 19:54:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:35.157 19:54:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:38.482 19:54:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:38.482 [2024-07-24 19:54:07.040840] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:38.482 19:54:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:39.419 19:54:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:39.677 [2024-07-24 19:54:08.291981] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506250 is same with the state(5) to be set 00:15:39.677 [2024-07-24 19:54:08.292049] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506250 is same with the state(5) to be set 00:15:39.677 [2024-07-24 19:54:08.292062] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506250 is same with the state(5) to be set 00:15:39.677 [2024-07-24 
19:54:08.292071] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1506250 is same with the state(5) to be set 00:15:39.677 19:54:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75016 00:15:46.245 0 00:15:46.245 19:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 74988 00:15:46.245 19:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 74988 ']' 00:15:46.245 19:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 74988 00:15:46.245 19:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:15:46.245 19:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:46.245 19:54:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74988 00:15:46.245 killing process with pid 74988 00:15:46.245 19:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:46.245 19:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:46.245 19:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74988' 00:15:46.245 19:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 74988 00:15:46.245 19:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 74988 00:15:46.245 19:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:46.245 [2024-07-24 19:53:57.226193] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:15:46.245 [2024-07-24 19:53:57.226321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74988 ] 00:15:46.245 [2024-07-24 19:53:57.368143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.245 [2024-07-24 19:53:57.497317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.245 [2024-07-24 19:53:57.554079] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:46.245 Running I/O for 15 seconds... 
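The ABORTED - SQ DELETION records that fill the rest of this try.txt dump are the fallout of the listener shuffling performed above: bdevperf is first given two paths to cnode1 (ports 4420 and 4421), perform_tests starts the verify workload, and the test then keeps removing whichever listener is carrying traffic so the bdev_nvme layer has to fail over to a surviving path. Condensed from the trace (rpc.py without -s talks to the target, with -s /var/tmp/bdevperf.sock to the initiator):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Two paths into the same NVMe bdev, then kick off the 15-second verify run.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
sleep 1

# While I/O runs, pull the active listener, add a third path, and rotate ports.
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 3
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 3
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

# failover.sh then waits for the perform_tests run to finish before tearing down.
wait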
00:15:46.245 [2024-07-24 19:54:00.109033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.245 [2024-07-24 19:54:00.109077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.245 [2024-07-24 19:54:00.109105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.245 [2024-07-24 19:54:00.109122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.245 [2024-07-24 19:54:00.109138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.245 [2024-07-24 19:54:00.109152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.245 [2024-07-24 19:54:00.109167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.245 [2024-07-24 19:54:00.109181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.245 [2024-07-24 19:54:00.109197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.245 [2024-07-24 19:54:00.109211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.245 [2024-07-24 19:54:00.109226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.245 [2024-07-24 19:54:00.109239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.245 [2024-07-24 19:54:00.109255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.245 [2024-07-24 19:54:00.109269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.245 [2024-07-24 19:54:00.109284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.245 [2024-07-24 19:54:00.109297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.245 [2024-07-24 19:54:00.109312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.245 [2024-07-24 19:54:00.109326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.245 [2024-07-24 19:54:00.109341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.109354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.109369] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.109412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.109430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.109444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.109459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.109472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.109487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.109501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.109516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.109529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.109545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.109558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.109584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.109599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.109615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.109628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.109644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.109658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.109673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.109687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.109702] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.109716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.109731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.109759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.109776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.109790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.109813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.109828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.109843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.109857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.109872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.109886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.109901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.109915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.109930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.109943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.109958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.109972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.109987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.110001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.110016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:95 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.110030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.110045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.110059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.110080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.110094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.110110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.110123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.110139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.110153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.110168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.110181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.110204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:64064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.110218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.110234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.110248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.110263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.110276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.110291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.110305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.110320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64096 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.110333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.110348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.110361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.110377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.110391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.110406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.110419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.110434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.110448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.110464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.110477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.110492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.246 [2024-07-24 19:54:00.110506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.246 [2024-07-24 19:54:00.110521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:64152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.110534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.110554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.110575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.110591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.110605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.110620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:15:46.247 [2024-07-24 19:54:00.110634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.110649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.110662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.110677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.110690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.110706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.110719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.110734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.110760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.110776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.110790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.110806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.110819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.110834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.110847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.110862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.110876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.110891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.110904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.110919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.110933] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.110955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.110970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.110985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.110999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.111014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.111028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.111048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.111062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.111078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.111091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.111107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.111121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.111136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.111149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.111164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.111178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.111194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.111207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.111222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.111236] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.111251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.111265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.111280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:64352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.111294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.111309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:64360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.111337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.111366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.111386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.111402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.111416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.111431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.111445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.111460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.111474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.111489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.111503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.111518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.111532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.111552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.111566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.111582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.111595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.111611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.111624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.111639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.111653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.111668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.111682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.111697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.247 [2024-07-24 19:54:00.111710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.247 [2024-07-24 19:54:00.111748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.248 [2024-07-24 19:54:00.111766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.111782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:64472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.248 [2024-07-24 19:54:00.111796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.111811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:64480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.248 [2024-07-24 19:54:00.111825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.111840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.248 [2024-07-24 19:54:00.111854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.111869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.248 [2024-07-24 19:54:00.111882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:46.248 [2024-07-24 19:54:00.111897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.248 [2024-07-24 19:54:00.111911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.111926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.248 [2024-07-24 19:54:00.111940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.111955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.248 [2024-07-24 19:54:00.111968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.111983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.248 [2024-07-24 19:54:00.111997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.248 [2024-07-24 19:54:00.112046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.248 [2024-07-24 19:54:00.112083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.248 [2024-07-24 19:54:00.112113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.248 [2024-07-24 19:54:00.112142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.248 [2024-07-24 19:54:00.112180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.248 [2024-07-24 19:54:00.112217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112232] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.248 [2024-07-24 19:54:00.112246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.248 [2024-07-24 19:54:00.112275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.248 [2024-07-24 19:54:00.112304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.248 [2024-07-24 19:54:00.112332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.248 [2024-07-24 19:54:00.112361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.248 [2024-07-24 19:54:00.112389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.248 [2024-07-24 19:54:00.112418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.248 [2024-07-24 19:54:00.112446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.248 [2024-07-24 19:54:00.112475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.248 [2024-07-24 19:54:00.112504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112519] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.248 [2024-07-24 19:54:00.112540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.248 [2024-07-24 19:54:00.112575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.248 [2024-07-24 19:54:00.112603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.248 [2024-07-24 19:54:00.112631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.248 [2024-07-24 19:54:00.112660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.248 [2024-07-24 19:54:00.112693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.248 [2024-07-24 19:54:00.112722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.248 [2024-07-24 19:54:00.112762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.248 [2024-07-24 19:54:00.112792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.248 [2024-07-24 19:54:00.112820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:31 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.248 [2024-07-24 19:54:00.112849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.248 [2024-07-24 19:54:00.112878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.248 [2024-07-24 19:54:00.112906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.248 [2024-07-24 19:54:00.112928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.248 [2024-07-24 19:54:00.112942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:00.112957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.249 [2024-07-24 19:54:00.112971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:00.112986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.249 [2024-07-24 19:54:00.112999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:00.113013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1175830 is same with the state(5) to be set 00:15:46.249 [2024-07-24 19:54:00.113030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.249 [2024-07-24 19:54:00.113043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.249 [2024-07-24 19:54:00.113055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64672 len:8 PRP1 0x0 PRP2 0x0 00:15:46.249 [2024-07-24 19:54:00.113068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:00.113128] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1175830 was disconnected and freed. reset controller. 
00:15:46.249 [2024-07-24 19:54:00.113146] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:46.249 [2024-07-24 19:54:00.113202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.249 [2024-07-24 19:54:00.113224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:00.113239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.249 [2024-07-24 19:54:00.113257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:00.113271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.249 [2024-07-24 19:54:00.113284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:00.113298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.249 [2024-07-24 19:54:00.113311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:00.113324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:46.249 [2024-07-24 19:54:00.113370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1106570 (9): Bad file descriptor 00:15:46.249 [2024-07-24 19:54:00.117247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:46.249 [2024-07-24 19:54:00.154519] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
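The failover recorded above (the qpair to 10.0.0.2:4420 disconnected and freed, outstanding I/O aborted with SQ DELETION, then bdev_nvme retargeting 10.0.0.2:4421 and resetting the controller) depends on the initiator knowing a second path for nqn.2016-06.io.spdk:cnode1. The sketch below shows how such an alternate path is typically registered with the stock scripts/rpc.py helpers; it is illustrative only, not the exact commands this job ran, and the bdev name Nvme0 is an assumed placeholder. The exact path-selection semantics of re-attaching under the same bdev name depend on the SPDK version and multipath mode.
#!/usr/bin/env bash
# Illustrative sketch: expose the subsystem on two ports and give the
# initiator-side bdev_nvme driver both transport IDs so it can fail over.
rpc=scripts/rpc.py
# Target side: add listeners for nqn.2016-06.io.spdk:cnode1 on 4420 and 4421
# (subsystem/namespace creation assumed to have happened earlier).
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# Initiator side: attach the primary path, then register 10.0.0.2:4421 under
# the same bdev name ("Nvme0" is an assumed name) as an additional trid.
$rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# When the 4420 connection drops, bdev_nvme aborts the in-flight I/O on the
# dead qpair and resets the controller against the remaining path, which is
# what the "Start failover ... Resetting controller successful" records show.
$rpc bdev_nvme_get_controllers   # inspect the registered controllers/paths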
00:15:46.249 [2024-07-24 19:54:03.761678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.249 [2024-07-24 19:54:03.761763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.761819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.249 [2024-07-24 19:54:03.761837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.761854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.249 [2024-07-24 19:54:03.761868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.761883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.249 [2024-07-24 19:54:03.761896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.761911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.249 [2024-07-24 19:54:03.761924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.761939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.249 [2024-07-24 19:54:03.761953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.761968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.249 [2024-07-24 19:54:03.761981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.761996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.249 [2024-07-24 19:54:03.762010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.762026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.249 [2024-07-24 19:54:03.762039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.762054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.249 [2024-07-24 19:54:03.762067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.762083] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.249 [2024-07-24 19:54:03.762096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.762111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.249 [2024-07-24 19:54:03.762124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.762144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.249 [2024-07-24 19:54:03.762157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.762172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.249 [2024-07-24 19:54:03.762193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.762210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.249 [2024-07-24 19:54:03.762223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.762238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.249 [2024-07-24 19:54:03.762252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.762267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.249 [2024-07-24 19:54:03.762281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.762298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.249 [2024-07-24 19:54:03.762312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.762328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.249 [2024-07-24 19:54:03.762341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.762356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.249 [2024-07-24 19:54:03.762370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.762384] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.249 [2024-07-24 19:54:03.762398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.762413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.249 [2024-07-24 19:54:03.762426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.762441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.249 [2024-07-24 19:54:03.762454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.762469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.249 [2024-07-24 19:54:03.762482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.762497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.249 [2024-07-24 19:54:03.762510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.762525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.249 [2024-07-24 19:54:03.762538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.762554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.249 [2024-07-24 19:54:03.762574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.249 [2024-07-24 19:54:03.762590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.249 [2024-07-24 19:54:03.762604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.762619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.250 [2024-07-24 19:54:03.762632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.762647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.250 [2024-07-24 19:54:03.762660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.762675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:73 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.250 [2024-07-24 19:54:03.762688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.762704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.250 [2024-07-24 19:54:03.762717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.762732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.250 [2024-07-24 19:54:03.762760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.762777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.250 [2024-07-24 19:54:03.762791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.762806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.250 [2024-07-24 19:54:03.762819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.762834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.250 [2024-07-24 19:54:03.762848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.762863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.250 [2024-07-24 19:54:03.762876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.762891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.250 [2024-07-24 19:54:03.762904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.762920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.250 [2024-07-24 19:54:03.762933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.762955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.250 [2024-07-24 19:54:03.762969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.762984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82056 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.250 [2024-07-24 19:54:03.762998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.763013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.250 [2024-07-24 19:54:03.763027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.763041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.250 [2024-07-24 19:54:03.763055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.763070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.250 [2024-07-24 19:54:03.763083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.763098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.250 [2024-07-24 19:54:03.763112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.763127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.250 [2024-07-24 19:54:03.763140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.763155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.250 [2024-07-24 19:54:03.763169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.763191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.250 [2024-07-24 19:54:03.763204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.763220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.250 [2024-07-24 19:54:03.763233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.763249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.250 [2024-07-24 19:54:03.763268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.763296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.250 
[2024-07-24 19:54:03.763331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.763348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.250 [2024-07-24 19:54:03.763371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.763388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.250 [2024-07-24 19:54:03.763401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.763416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.250 [2024-07-24 19:54:03.763430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.763444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.250 [2024-07-24 19:54:03.763458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.763473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.250 [2024-07-24 19:54:03.763486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.763501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.250 [2024-07-24 19:54:03.763514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.763530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.250 [2024-07-24 19:54:03.763543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.763558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.250 [2024-07-24 19:54:03.763572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.250 [2024-07-24 19:54:03.763587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.251 [2024-07-24 19:54:03.763600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.763615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.251 [2024-07-24 19:54:03.763629] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.763644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.251 [2024-07-24 19:54:03.763657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.763672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.251 [2024-07-24 19:54:03.763685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.763701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.251 [2024-07-24 19:54:03.763715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.763748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.251 [2024-07-24 19:54:03.763789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.763808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.251 [2024-07-24 19:54:03.763822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.763838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.251 [2024-07-24 19:54:03.763851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.763866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.251 [2024-07-24 19:54:03.763880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.763895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.251 [2024-07-24 19:54:03.763909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.763924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.251 [2024-07-24 19:54:03.763937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.763952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.251 [2024-07-24 19:54:03.763966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.763981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.251 [2024-07-24 19:54:03.763994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.764009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.251 [2024-07-24 19:54:03.764023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.764054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.251 [2024-07-24 19:54:03.764070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.764086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.251 [2024-07-24 19:54:03.764099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.764114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.251 [2024-07-24 19:54:03.764127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.764143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.251 [2024-07-24 19:54:03.764157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.764181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.251 [2024-07-24 19:54:03.764195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.764212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.251 [2024-07-24 19:54:03.764225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.764241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.251 [2024-07-24 19:54:03.764255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.764270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.251 [2024-07-24 19:54:03.764285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.764300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.251 [2024-07-24 19:54:03.764314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.764329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.251 [2024-07-24 19:54:03.764342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.764357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.251 [2024-07-24 19:54:03.764371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.764386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.251 [2024-07-24 19:54:03.764399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.764414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.251 [2024-07-24 19:54:03.764434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.764449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.251 [2024-07-24 19:54:03.764463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.764478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.251 [2024-07-24 19:54:03.764491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.764506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.251 [2024-07-24 19:54:03.764519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.764534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.251 [2024-07-24 19:54:03.764554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.764569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.251 [2024-07-24 19:54:03.764583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 
19:54:03.764598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.251 [2024-07-24 19:54:03.764612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.764628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.251 [2024-07-24 19:54:03.764641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.764656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.251 [2024-07-24 19:54:03.764670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.764685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.251 [2024-07-24 19:54:03.764698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.764714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.251 [2024-07-24 19:54:03.764728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.764758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.251 [2024-07-24 19:54:03.764774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.764789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.251 [2024-07-24 19:54:03.764803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.251 [2024-07-24 19:54:03.764818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.252 [2024-07-24 19:54:03.764831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.764846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.252 [2024-07-24 19:54:03.764860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.764875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.252 [2024-07-24 19:54:03.764888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.764903] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.252 [2024-07-24 19:54:03.764916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.764940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.252 [2024-07-24 19:54:03.764954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.764969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.252 [2024-07-24 19:54:03.764982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.764997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.252 [2024-07-24 19:54:03.765011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.765025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.252 [2024-07-24 19:54:03.765039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.765053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.252 [2024-07-24 19:54:03.765067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.765082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.252 [2024-07-24 19:54:03.765095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.765110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.252 [2024-07-24 19:54:03.765124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.765139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.252 [2024-07-24 19:54:03.765152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.765167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.252 [2024-07-24 19:54:03.765181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.765197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:117 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.252 [2024-07-24 19:54:03.765211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.765226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.252 [2024-07-24 19:54:03.765239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.765255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.252 [2024-07-24 19:54:03.765268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.765283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.252 [2024-07-24 19:54:03.765302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.765319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.252 [2024-07-24 19:54:03.765333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.765348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.252 [2024-07-24 19:54:03.765361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.765376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.252 [2024-07-24 19:54:03.765390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.765405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.252 [2024-07-24 19:54:03.765418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.765433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.252 [2024-07-24 19:54:03.765446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.765461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.252 [2024-07-24 19:54:03.765475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.765490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82320 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.252 [2024-07-24 19:54:03.765503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.765518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.252 [2024-07-24 19:54:03.765531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.765547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.252 [2024-07-24 19:54:03.765560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.765575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.252 [2024-07-24 19:54:03.765588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.765603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.252 [2024-07-24 19:54:03.765617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.765632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.252 [2024-07-24 19:54:03.765645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.765667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1175d50 is same with the state(5) to be set 00:15:46.252 [2024-07-24 19:54:03.765685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.252 [2024-07-24 19:54:03.765696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.252 [2024-07-24 19:54:03.765707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82368 len:8 PRP1 0x0 PRP2 0x0 00:15:46.252 [2024-07-24 19:54:03.765720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:03.765792] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1175d50 was disconnected and freed. reset controller. 
00:15:46.252 [2024-07-24 19:54:03.765813] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:15:46.252 [2024-07-24 19:54:03.765872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:46.252 [2024-07-24 19:54:03.765893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:46.252 [2024-07-24 19:54:03.765908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:46.252 [2024-07-24 19:54:03.765922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:46.252 [2024-07-24 19:54:03.765936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:46.252 [2024-07-24 19:54:03.765949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:46.252 [2024-07-24 19:54:03.765963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:46.252 [2024-07-24 19:54:03.765976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:46.252 [2024-07-24 19:54:03.765988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:46.252 [2024-07-24 19:54:03.769846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:15:46.252 [2024-07-24 19:54:03.769891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1106570 (9): Bad file descriptor 
00:15:46.252 [2024-07-24 19:54:03.810454] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:46.252 [2024-07-24 19:54:08.292688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:38224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.252 [2024-07-24 19:54:08.292755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:08.292786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.252 [2024-07-24 19:54:08.292803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.252 [2024-07-24 19:54:08.292819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.253 [2024-07-24 19:54:08.292833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.292849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.253 [2024-07-24 19:54:08.292862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.292878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:38256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.253 [2024-07-24 19:54:08.292912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.292929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.253 [2024-07-24 19:54:08.292943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.292958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.253 [2024-07-24 19:54:08.292972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.292987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.253 [2024-07-24 19:54:08.293001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.253 [2024-07-24 19:54:08.293028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.253 [2024-07-24 19:54:08.293057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293072] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:38688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.253 [2024-07-24 19:54:08.293086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.253 [2024-07-24 19:54:08.293114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.253 [2024-07-24 19:54:08.293142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:38264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.253 [2024-07-24 19:54:08.293177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.253 [2024-07-24 19:54:08.293206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.253 [2024-07-24 19:54:08.293233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:38288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.253 [2024-07-24 19:54:08.293261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.253 [2024-07-24 19:54:08.293299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.253 [2024-07-24 19:54:08.293328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:38312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.253 [2024-07-24 19:54:08.293357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.253 [2024-07-24 19:54:08.293385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:38712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.253 [2024-07-24 19:54:08.293413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.253 [2024-07-24 19:54:08.293441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.253 [2024-07-24 19:54:08.293470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:38736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.253 [2024-07-24 19:54:08.293501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.253 [2024-07-24 19:54:08.293531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.253 [2024-07-24 19:54:08.293559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.253 [2024-07-24 19:54:08.293588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:38768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.253 [2024-07-24 19:54:08.293616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:38776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.253 [2024-07-24 19:54:08.293652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:55 nsid:1 lba:38784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.253 [2024-07-24 19:54:08.293681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:38792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.253 [2024-07-24 19:54:08.293719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.253 [2024-07-24 19:54:08.293762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.253 [2024-07-24 19:54:08.293792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.253 [2024-07-24 19:54:08.293821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:38824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.253 [2024-07-24 19:54:08.293850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.253 [2024-07-24 19:54:08.293879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:38328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.253 [2024-07-24 19:54:08.293908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.253 [2024-07-24 19:54:08.293936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:38344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.253 [2024-07-24 19:54:08.293965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.293982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:38352 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:15:46.253 [2024-07-24 19:54:08.293996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.253 [2024-07-24 19:54:08.294012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.253 [2024-07-24 19:54:08.294025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.254 [2024-07-24 19:54:08.294063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.254 [2024-07-24 19:54:08.294092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.254 [2024-07-24 19:54:08.294121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.254 [2024-07-24 19:54:08.294149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.254 [2024-07-24 19:54:08.294178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.254 [2024-07-24 19:54:08.294206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.254 [2024-07-24 19:54:08.294235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.254 [2024-07-24 19:54:08.294281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.254 
[2024-07-24 19:54:08.294325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.254 [2024-07-24 19:54:08.294354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.254 [2024-07-24 19:54:08.294383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.254 [2024-07-24 19:54:08.294412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.254 [2024-07-24 19:54:08.294440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.254 [2024-07-24 19:54:08.294480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.254 [2024-07-24 19:54:08.294511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:38936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.254 [2024-07-24 19:54:08.294540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.254 [2024-07-24 19:54:08.294568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.254 [2024-07-24 19:54:08.294597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.254 [2024-07-24 19:54:08.294626] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.254 [2024-07-24 19:54:08.294654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.254 [2024-07-24 19:54:08.294683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.254 [2024-07-24 19:54:08.294711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.254 [2024-07-24 19:54:08.294753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.254 [2024-07-24 19:54:08.294784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.254 [2024-07-24 19:54:08.294813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.254 [2024-07-24 19:54:08.294850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.254 [2024-07-24 19:54:08.294879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.254 [2024-07-24 19:54:08.294908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:38432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.254 [2024-07-24 19:54:08.294937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:38440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.254 [2024-07-24 19:54:08.294966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.294982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:38448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.254 [2024-07-24 19:54:08.294996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.295012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.254 [2024-07-24 19:54:08.295025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.254 [2024-07-24 19:54:08.295040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:38464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.254 [2024-07-24 19:54:08.295054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.255 [2024-07-24 19:54:08.295083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.255 [2024-07-24 19:54:08.295112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.255 [2024-07-24 19:54:08.295141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.255 [2024-07-24 19:54:08.295170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.255 [2024-07-24 19:54:08.295198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.255 [2024-07-24 19:54:08.295234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.255 [2024-07-24 19:54:08.295263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.255 [2024-07-24 19:54:08.295292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.255 [2024-07-24 19:54:08.295321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.255 [2024-07-24 19:54:08.295349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.255 [2024-07-24 19:54:08.295379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.255 [2024-07-24 19:54:08.295408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.255 [2024-07-24 19:54:08.295438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.255 [2024-07-24 19:54:08.295468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.255 [2024-07-24 19:54:08.295497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.255 [2024-07-24 19:54:08.295526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 
19:54:08.295541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.255 [2024-07-24 19:54:08.295555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.255 [2024-07-24 19:54:08.295596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.255 [2024-07-24 19:54:08.295626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:46.255 [2024-07-24 19:54:08.295655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.255 [2024-07-24 19:54:08.295685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:38528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.255 [2024-07-24 19:54:08.295713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.255 [2024-07-24 19:54:08.295763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.255 [2024-07-24 19:54:08.295796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:38552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.255 [2024-07-24 19:54:08.295825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.255 [2024-07-24 19:54:08.295854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295869] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.255 [2024-07-24 19:54:08.295882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.255 [2024-07-24 19:54:08.295911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.255 [2024-07-24 19:54:08.295940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.255 [2024-07-24 19:54:08.295969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.295984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.255 [2024-07-24 19:54:08.296005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.296022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.255 [2024-07-24 19:54:08.296045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.296063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.255 [2024-07-24 19:54:08.296076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.296092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.255 [2024-07-24 19:54:08.296105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.296121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:46.255 [2024-07-24 19:54:08.296134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.296149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1175a10 is same with the state(5) to be set 00:15:46.255 [2024-07-24 19:54:08.296174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.255 [2024-07-24 19:54:08.296185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.255 [2024-07-24 
19:54:08.296196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38640 len:8 PRP1 0x0 PRP2 0x0 00:15:46.255 [2024-07-24 19:54:08.296209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.296224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.255 [2024-07-24 19:54:08.296234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.255 [2024-07-24 19:54:08.296245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39112 len:8 PRP1 0x0 PRP2 0x0 00:15:46.255 [2024-07-24 19:54:08.296258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.255 [2024-07-24 19:54:08.296271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.255 [2024-07-24 19:54:08.296281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.256 [2024-07-24 19:54:08.296291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39120 len:8 PRP1 0x0 PRP2 0x0 00:15:46.256 [2024-07-24 19:54:08.296304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.256 [2024-07-24 19:54:08.296318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.256 [2024-07-24 19:54:08.296328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.256 [2024-07-24 19:54:08.296337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39128 len:8 PRP1 0x0 PRP2 0x0 00:15:46.256 [2024-07-24 19:54:08.296351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.256 [2024-07-24 19:54:08.296364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.256 [2024-07-24 19:54:08.296375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.256 [2024-07-24 19:54:08.296392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39136 len:8 PRP1 0x0 PRP2 0x0 00:15:46.256 [2024-07-24 19:54:08.296407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.256 [2024-07-24 19:54:08.296421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.256 [2024-07-24 19:54:08.296431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.256 [2024-07-24 19:54:08.296442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39144 len:8 PRP1 0x0 PRP2 0x0 00:15:46.256 [2024-07-24 19:54:08.296456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.256 [2024-07-24 19:54:08.296469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.256 [2024-07-24 19:54:08.296479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.256 [2024-07-24 19:54:08.296489] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39152 len:8 PRP1 0x0 PRP2 0x0 00:15:46.256 [2024-07-24 19:54:08.296503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.256 [2024-07-24 19:54:08.296516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.256 [2024-07-24 19:54:08.296526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.256 [2024-07-24 19:54:08.296537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39160 len:8 PRP1 0x0 PRP2 0x0 00:15:46.256 [2024-07-24 19:54:08.296550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.256 [2024-07-24 19:54:08.296563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.256 [2024-07-24 19:54:08.296573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.256 [2024-07-24 19:54:08.296584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39168 len:8 PRP1 0x0 PRP2 0x0 00:15:46.256 [2024-07-24 19:54:08.296597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.256 [2024-07-24 19:54:08.296610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.256 [2024-07-24 19:54:08.296620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.256 [2024-07-24 19:54:08.296630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39176 len:8 PRP1 0x0 PRP2 0x0 00:15:46.256 [2024-07-24 19:54:08.296643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.256 [2024-07-24 19:54:08.296657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.256 [2024-07-24 19:54:08.296667] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.256 [2024-07-24 19:54:08.296677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39184 len:8 PRP1 0x0 PRP2 0x0 00:15:46.256 [2024-07-24 19:54:08.296690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.256 [2024-07-24 19:54:08.296704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.256 [2024-07-24 19:54:08.296714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.256 [2024-07-24 19:54:08.296724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39192 len:8 PRP1 0x0 PRP2 0x0 00:15:46.256 [2024-07-24 19:54:08.296759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.256 [2024-07-24 19:54:08.296776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.256 [2024-07-24 19:54:08.296793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.256 [2024-07-24 19:54:08.296804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:39200 len:8 PRP1 0x0 PRP2 0x0 00:15:46.256 [2024-07-24 19:54:08.296817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.256 [2024-07-24 19:54:08.296831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.256 [2024-07-24 19:54:08.296841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.256 [2024-07-24 19:54:08.296851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39208 len:8 PRP1 0x0 PRP2 0x0 00:15:46.256 [2024-07-24 19:54:08.296865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.256 [2024-07-24 19:54:08.296878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.256 [2024-07-24 19:54:08.296888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.256 [2024-07-24 19:54:08.296899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39216 len:8 PRP1 0x0 PRP2 0x0 00:15:46.256 [2024-07-24 19:54:08.296912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.256 [2024-07-24 19:54:08.296925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.256 [2024-07-24 19:54:08.296936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.256 [2024-07-24 19:54:08.296946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39224 len:8 PRP1 0x0 PRP2 0x0 00:15:46.256 [2024-07-24 19:54:08.296959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.256 [2024-07-24 19:54:08.296972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.256 [2024-07-24 19:54:08.296982] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.256 [2024-07-24 19:54:08.296993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39232 len:8 PRP1 0x0 PRP2 0x0 00:15:46.256 [2024-07-24 19:54:08.297006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.256 [2024-07-24 19:54:08.297019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:46.256 [2024-07-24 19:54:08.297029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:46.256 [2024-07-24 19:54:08.297039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39240 len:8 PRP1 0x0 PRP2 0x0 00:15:46.256 [2024-07-24 19:54:08.297052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.256 [2024-07-24 19:54:08.297111] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1175a10 was disconnected and freed. reset controller. 
00:15:46.256 [2024-07-24 19:54:08.297130] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:15:46.256 [2024-07-24 19:54:08.297186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:15:46.256 [2024-07-24 19:54:08.297208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:46.256 [2024-07-24 19:54:08.297223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:15:46.256 [2024-07-24 19:54:08.297236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:46.256 [2024-07-24 19:54:08.297250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:15:46.256 [2024-07-24 19:54:08.297284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:46.256 [2024-07-24 19:54:08.297310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:15:46.256 [2024-07-24 19:54:08.297327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:46.256 [2024-07-24 19:54:08.297341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:15:46.256 [2024-07-24 19:54:08.297375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1106570 (9): Bad file descriptor
00:15:46.256 [2024-07-24 19:54:08.301187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:15:46.256 [2024-07-24 19:54:08.341850] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:15:46.256
00:15:46.257 Latency(us)
00:15:46.257 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:46.257 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:46.257 Verification LBA range: start 0x0 length 0x4000
00:15:46.257 NVMe0n1 : 15.01 9166.60 35.81 241.26 0.00 13573.44 662.81 17873.45
00:15:46.257 ===================================================================================================================
00:15:46.257 Total : 9166.60 35.81 241.26 0.00 13573.44 662.81 17873.45
00:15:46.257 Received shutdown signal, test time was about 15.000000 seconds
00:15:46.257
00:15:46.257 Latency(us)
00:15:46.257 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:46.257 ===================================================================================================================
00:15:46.257 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:15:46.257 19:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:15:46.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
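The lines that follow repeat the failover exercise with a short, one-second bdevperf pass: listeners are added back on ports 4421 and 4422, NVMe0 is attached over each of the three portals, and the paths are then detached one at a time while bdev_nvme_get_controllers is grepped for NVMe0. Condensed into a standalone sketch (the rpc.py path, bdevperf socket, NQN, and addresses are the ones this run uses; the loop structure is only an illustration, not the failover.sh source), the sequence looks roughly like this:

#!/usr/bin/env bash
# Sketch only: condensed from the rpc.py calls that appear in this log run.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# Advertise two additional portals on the target side.
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422

# Attach the same subsystem to bdevperf over each portal.
for port in 4420 4421 4422; do
    "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN"
done

# Remove paths one at a time; NVMe0 should remain visible after every removal.
for port in 4420 4422 4421; do
    "$RPC" -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN"
    "$RPC" -s "$SOCK" bdev_nvme_get_controllers | grep -q NVMe0
done

The grep -q NVMe0 checks after each step in this sketch mirror the failover.sh checks visible in the lines below.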
00:15:46.257 19:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:15:46.257 19:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:15:46.257 19:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75192 00:15:46.257 19:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75192 /var/tmp/bdevperf.sock 00:15:46.257 19:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 75192 ']' 00:15:46.257 19:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:46.257 19:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:46.257 19:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:46.257 19:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:46.257 19:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:46.257 19:54:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:46.824 19:54:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:46.824 19:54:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:15:46.824 19:54:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:47.082 [2024-07-24 19:54:15.512549] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:47.082 19:54:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:47.082 [2024-07-24 19:54:15.748809] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:47.341 19:54:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:47.599 NVMe0n1 00:15:47.599 19:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:47.858 00:15:47.858 19:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:48.116 00:15:48.116 19:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:48.116 19:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:15:48.374 19:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:48.633 19:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:15:51.919 19:54:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:51.919 19:54:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:15:51.919 19:54:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75269 00:15:51.919 19:54:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:51.919 19:54:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75269 00:15:52.884 0 00:15:52.884 19:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:52.884 [2024-07-24 19:54:14.287162] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:15:52.884 [2024-07-24 19:54:14.287339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75192 ] 00:15:52.884 [2024-07-24 19:54:14.420118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.884 [2024-07-24 19:54:14.533573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.884 [2024-07-24 19:54:14.586281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:52.884 [2024-07-24 19:54:17.138437] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:52.884 [2024-07-24 19:54:17.138573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.884 [2024-07-24 19:54:17.138599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.884 [2024-07-24 19:54:17.138618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.884 [2024-07-24 19:54:17.138633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.884 [2024-07-24 19:54:17.138648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.884 [2024-07-24 19:54:17.138662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.884 [2024-07-24 19:54:17.138677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:52.884 [2024-07-24 19:54:17.138690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:52.884 [2024-07-24 19:54:17.138705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:52.884 [2024-07-24 19:54:17.138770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:52.884 [2024-07-24 19:54:17.138805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2057570 (9): Bad file descriptor 00:15:52.884 [2024-07-24 19:54:17.145385] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:52.884 Running I/O for 1 seconds... 00:15:52.884 00:15:52.884 Latency(us) 00:15:52.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.885 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:52.885 Verification LBA range: start 0x0 length 0x4000 00:15:52.885 NVMe0n1 : 1.01 6989.10 27.30 0.00 0.00 18240.99 2234.18 15013.70 00:15:52.885 =================================================================================================================== 00:15:52.885 Total : 6989.10 27.30 0.00 0.00 18240.99 2234.18 15013.70 00:15:53.142 19:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:15:53.142 19:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:53.142 19:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:53.400 19:54:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:53.400 19:54:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:15:53.658 19:54:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:53.917 19:54:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:15:57.254 19:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:57.254 19:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:15:57.254 19:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75192 00:15:57.254 19:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 75192 ']' 00:15:57.254 19:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 75192 00:15:57.254 19:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:15:57.254 19:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:57.254 19:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75192 00:15:57.254 killing process with pid 75192 00:15:57.254 19:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:57.254 19:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:57.254 19:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75192' 00:15:57.254 19:54:25 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 75192 00:15:57.254 19:54:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 75192 00:15:57.513 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:15:57.513 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:58.080 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:58.080 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:58.080 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:58.080 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:58.080 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:15:58.080 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:58.080 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:15:58.081 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:58.081 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:58.081 rmmod nvme_tcp 00:15:58.081 rmmod nvme_fabrics 00:15:58.081 rmmod nvme_keyring 00:15:58.081 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:58.081 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:15:58.081 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:15:58.081 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 74936 ']' 00:15:58.081 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 74936 00:15:58.081 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 74936 ']' 00:15:58.081 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 74936 00:15:58.081 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:15:58.081 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:58.081 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74936 00:15:58.081 killing process with pid 74936 00:15:58.081 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:58.081 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:58.081 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74936' 00:15:58.081 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 74936 00:15:58.081 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 74936 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:58.340 ************************************ 00:15:58.340 END TEST nvmf_failover 00:15:58.340 ************************************ 00:15:58.340 00:15:58.340 real 0m32.883s 00:15:58.340 user 2m7.484s 00:15:58.340 sys 0m5.589s 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.340 ************************************ 00:15:58.340 START TEST nvmf_host_discovery 00:15:58.340 ************************************ 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:58.340 * Looking for test storage... 
00:15:58.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.340 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:58.341 19:54:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:58.341 19:54:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.341 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:58.341 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:58.341 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:58.341 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:58.341 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:58.341 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:58.341 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:58.341 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:58.341 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:58.341 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:58.341 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:58.341 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:58.341 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 
-- # ip link set nvmf_tgt_br nomaster 00:15:58.599 Cannot find device "nvmf_tgt_br" 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:58.599 Cannot find device "nvmf_tgt_br2" 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:58.599 Cannot find device "nvmf_tgt_br" 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:58.599 Cannot find device "nvmf_tgt_br2" 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:58.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:58.599 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@185 
-- # ip link set nvmf_tgt_br up 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:58.599 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:58.857 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:58.857 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:58.857 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:58.857 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:58.857 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:58.857 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:58.858 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:58.858 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:58.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:58.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:15:58.858 00:15:58.858 --- 10.0.0.2 ping statistics --- 00:15:58.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.858 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:15:58.858 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:58.858 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:58.858 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:15:58.858 00:15:58.858 --- 10.0.0.3 ping statistics --- 00:15:58.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.858 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:58.858 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:58.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:58.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:58.858 00:15:58.858 --- 10.0.0.1 ping statistics --- 00:15:58.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.858 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:58.858 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:58.858 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:15:58.858 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:58.858 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:58.858 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:58.858 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:58.858 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:58.858 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:58.858 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:58.858 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:58.858 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:58.858 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:58.858 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.858 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=75540 00:15:58.858 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 75540 00:15:58.858 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:58.858 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 75540 ']' 00:15:58.858 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.858 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:58.858 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.858 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:58.858 19:54:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.858 [2024-07-24 19:54:27.413946] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
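[Editor's note] The plumbing traced above (nvmf/common.sh@166 through @207) gives the target its own network namespace, three veth pairs hung off a bridge, an iptables accept rule for port 4420, and a ping sanity check in each direction before nvmf_tgt is launched inside the namespace. A minimal standalone sketch of the same topology, outside the test harness, might look like this (names and addresses follow the log; run as root):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target port 1 pair
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target port 2 pair
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3              # host -> target addresses
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target namespace -> host

Teardown is the mirror image: the @155-@163 block at the top of this run shows the harness deleting the same bridge, veth devices and namespace (ignoring "not found" errors) before recreating them.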
00:15:58.858 [2024-07-24 19:54:27.414039] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.116 [2024-07-24 19:54:27.556463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.116 [2024-07-24 19:54:27.684411] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:59.116 [2024-07-24 19:54:27.684478] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:59.116 [2024-07-24 19:54:27.684492] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:59.116 [2024-07-24 19:54:27.684503] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:59.116 [2024-07-24 19:54:27.684513] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:59.116 [2024-07-24 19:54:27.684553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:59.116 [2024-07-24 19:54:27.741802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:59.682 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:59.682 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:15:59.682 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:59.682 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:59.682 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.682 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:59.682 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:59.682 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.682 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.682 [2024-07-24 19:54:28.349533] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:59.941 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.941 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:15:59.941 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.941 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.941 [2024-07-24 19:54:28.357645] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:59.941 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.941 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:59.941 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.941 19:54:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.941 null0 00:15:59.941 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.941 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:59.941 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.941 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.941 null1 00:15:59.941 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.941 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:59.941 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.941 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.941 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:59.941 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.941 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75572 00:15:59.941 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75572 /tmp/host.sock 00:15:59.941 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 75572 ']' 00:15:59.941 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:59.941 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:15:59.941 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:59.941 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:59.941 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:59.941 19:54:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.941 [2024-07-24 19:54:28.439639] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
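[Editor's note] At this point two SPDK applications are running: the nvmf target inside nvmf_tgt_ns_spdk on the default RPC socket (/var/tmp/spdk.sock, pid 75540 above), and a second nvmf_tgt acting as the host/initiator stack on /tmp/host.sock (pid 75572). Roughly the same setup, driven with scripts/rpc.py directly instead of the harness's rpc_cmd/waitforlisten wrappers, could look like the sketch below; the SPDK_DIR variable and the sleep are stand-ins for the harness's own paths and polling, and the RPC arguments are copied from the trace:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk            # path used in this run
  ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  "$SPDK_DIR/build/bin/nvmf_tgt" -m 0x1 -r /tmp/host.sock &
  sleep 3                                          # the harness polls waitforlisten instead
  # target-side configuration over the default socket /var/tmp/spdk.sock
  "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
  "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  "$SPDK_DIR/scripts/rpc.py" bdev_null_create null0 1000 512
  "$SPDK_DIR/scripts/rpc.py" bdev_null_create null1 1000 512
  "$SPDK_DIR/scripts/rpc.py" bdev_wait_for_examine

The host side is then configured over /tmp/host.sock (log_set_flag bdev_nvme, bdev_nvme_start_discovery), as the trace that follows shows.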
00:15:59.941 [2024-07-24 19:54:28.439819] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75572 ] 00:15:59.941 [2024-07-24 19:54:28.579432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.199 [2024-07-24 19:54:28.722884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.199 [2024-07-24 19:54:28.788888] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:00.766 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:00.766 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:16:00.766 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:00.766 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:00.766 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.766 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.766 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.766 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:00.766 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.766 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.766 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.766 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:16:00.766 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:16:00.766 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:00.766 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:00.766 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.766 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:00.766 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:00.766 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.766 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:01.025 19:54:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.025 19:54:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:01.025 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.284 [2024-07-24 19:54:29.746125] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:16:01.284 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:01.285 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:01.285 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:01.285 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.285 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.285 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.285 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:01.285 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:01.285 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:01.285 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:01.285 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:01.285 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:01.285 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:01.285 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.285 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.285 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:01.285 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:01.285 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:01.285 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.543 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:16:01.543 19:54:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:16:01.801 [2024-07-24 19:54:30.412918] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:01.801 [2024-07-24 19:54:30.412957] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:01.801 [2024-07-24 19:54:30.412975] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:01.801 
[2024-07-24 19:54:30.418964] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:02.059 [2024-07-24 19:54:30.476122] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:02.059 [2024-07-24 19:54:30.476335] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:02.317 19:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:02.317 19:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:02.317 19:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:02.317 19:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:02.317 19:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:02.317 19:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:02.317 19:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:02.317 19:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.317 19:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.575 19:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
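[Editor's note] The checks in this part of the run all go through the same polling pattern visible in the xtrace: waitforcondition evaluates a condition string up to ten times, a second apart, and the getters are thin jq pipelines over the host-side RPC socket. A paraphrase of that pattern (the real helpers live in autotest_common.sh and host/discovery.sh; rpc_cmd is the harness wrapper around scripts/rpc.py) looks like:

  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          if eval "$cond"; then
              return 0
          fi
          sleep 1
      done
      return 1
  }
  get_subsystem_names() {   # controller names seen by the host app
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {         # bdevs created from attached namespaces
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  # e.g. the checks that discovery attached one controller and exposed its first namespace:
  waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
  waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'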
00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.575 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # local max=10 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.834 [2024-07-24 19:54:31.323527] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:02.834 [2024-07-24 19:54:31.324161] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:02.834 [2024-07-24 19:54:31.324335] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:02.834 [2024-07-24 19:54:31.330147] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:02.834 
19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:02.834 [2024-07-24 19:54:31.388433] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:02.834 [2024-07-24 19:54:31.388456] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:02.834 [2024-07-24 19:54:31.388463] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 
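[Editor's note] After the second listener on 4421 is added, the test expects the existing nvme0 controller to gain a second path rather than a new controller appearing, and it expects no additional namespace notifications. The two getters used for that, again paraphrased from the xtrace (the real definitions are in host/discovery.sh; the socket is hard-coded here to match this run), are roughly:

  get_subsystem_paths() {      # trsvcid of every path of one controller, e.g. "4420 4421"
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }
  get_notification_count() {   # notifications since the last call; advances notify_id
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }
  # after nvmf_subsystem_add_listener ... -s 4421:
  waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4420 4421" ]]'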
00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:02.834 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.835 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:02.835 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:02.835 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:16:02.835 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:02.835 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:02.835 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:02.835 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:02.835 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:02.835 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:02.835 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:02.835 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:02.835 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:02.835 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.835 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.093 [2024-07-24 19:54:31.552526] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:03.093 [2024-07-24 19:54:31.552563] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:03.093 [2024-07-24 19:54:31.553361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:03.093 [2024-07-24 19:54:31.553394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.093 [2024-07-24 19:54:31.553423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:03.093 [2024-07-24 19:54:31.553433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.093 [2024-07-24 19:54:31.553442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:03.093 [2024-07-24 19:54:31.553451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.093 [2024-07-24 19:54:31.553460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:03.093 [2024-07-24 19:54:31.553469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:03.093 [2024-07-24 19:54:31.553478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ca620 is same with the state(5) to be set 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 
max=10 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:03.093 [2024-07-24 19:54:31.558574] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:16:03.093 [2024-07-24 19:54:31.558607] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:03.093 [2024-07-24 19:54:31.558677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ca620 (9): Bad file descriptor 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.093 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.094 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 
-- # (( max-- )) 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:16:03.352 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:03.353 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.353 19:54:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.726 [2024-07-24 19:54:32.993963] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:04.726 [2024-07-24 19:54:32.994145] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:04.726 [2024-07-24 19:54:32.994209] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:04.726 [2024-07-24 19:54:33.000014] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:16:04.726 [2024-07-24 19:54:33.061034] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:04.726 [2024-07-24 19:54:33.061077] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.726 request: 00:16:04.726 { 00:16:04.726 "name": "nvme", 00:16:04.726 "trtype": "tcp", 00:16:04.726 "traddr": "10.0.0.2", 00:16:04.726 "adrfam": "ipv4", 00:16:04.726 "trsvcid": "8009", 00:16:04.726 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:04.726 "wait_for_attach": true, 00:16:04.726 "method": "bdev_nvme_start_discovery", 00:16:04.726 "req_id": 1 00:16:04.726 } 00:16:04.726 Got JSON-RPC error response 00:16:04.726 response: 00:16:04.726 { 00:16:04.726 "code": -17, 00:16:04.726 "message": "File exists" 00:16:04.726 } 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.726 request: 00:16:04.726 { 00:16:04.726 "name": "nvme_second", 00:16:04.726 "trtype": "tcp", 00:16:04.726 "traddr": "10.0.0.2", 00:16:04.726 "adrfam": "ipv4", 00:16:04.726 "trsvcid": "8009", 00:16:04.726 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:04.726 "wait_for_attach": true, 00:16:04.726 "method": "bdev_nvme_start_discovery", 00:16:04.726 "req_id": 1 00:16:04.726 } 00:16:04.726 Got JSON-RPC error response 00:16:04.726 response: 00:16:04.726 { 00:16:04.726 "code": -17, 00:16:04.726 "message": "File exists" 00:16:04.726 } 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:16:04.726 19:54:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:04.726 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:16:04.727 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:04.727 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:04.727 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:04.727 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:04.727 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:04.727 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:04.727 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.727 19:54:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.099 [2024-07-24 19:54:34.345459] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:06.099 [2024-07-24 19:54:34.345535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x806c30 with addr=10.0.0.2, port=8010 00:16:06.099 [2024-07-24 19:54:34.345562] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:06.099 [2024-07-24 19:54:34.345573] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:06.099 [2024-07-24 19:54:34.345584] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:07.033 [2024-07-24 19:54:35.345457] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:07.033 [2024-07-24 19:54:35.345533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x806c30 with addr=10.0.0.2, port=8010 00:16:07.033 [2024-07-24 19:54:35.345574] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:07.033 [2024-07-24 19:54:35.345584] nvme.c: 830:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:16:07.033 [2024-07-24 19:54:35.345594] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:07.968 [2024-07-24 19:54:36.345306] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:16:07.969 request: 00:16:07.969 { 00:16:07.969 "name": "nvme_second", 00:16:07.969 "trtype": "tcp", 00:16:07.969 "traddr": "10.0.0.2", 00:16:07.969 "adrfam": "ipv4", 00:16:07.969 "trsvcid": "8010", 00:16:07.969 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:07.969 "wait_for_attach": false, 00:16:07.969 "attach_timeout_ms": 3000, 00:16:07.969 "method": "bdev_nvme_start_discovery", 00:16:07.969 "req_id": 1 00:16:07.969 } 00:16:07.969 Got JSON-RPC error response 00:16:07.969 response: 00:16:07.969 { 00:16:07.969 "code": -110, 00:16:07.969 "message": "Connection timed out" 00:16:07.969 } 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75572 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:07.969 rmmod nvme_tcp 00:16:07.969 rmmod nvme_fabrics 00:16:07.969 rmmod nvme_keyring 00:16:07.969 19:54:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 75540 ']' 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 75540 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 75540 ']' 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 75540 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75540 00:16:07.969 killing process with pid 75540 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75540' 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 75540 00:16:07.969 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 75540 00:16:08.227 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:08.227 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:08.227 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:08.227 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:08.227 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:08.227 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.227 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:08.227 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.227 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:08.227 00:16:08.227 real 0m9.927s 00:16:08.227 user 0m19.124s 00:16:08.227 sys 0m1.953s 00:16:08.227 ************************************ 00:16:08.227 END TEST nvmf_host_discovery 00:16:08.227 ************************************ 00:16:08.227 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:08.227 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:08.227 19:54:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:08.227 19:54:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:08.227 19:54:36 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:08.227 19:54:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:08.227 ************************************ 00:16:08.227 START TEST nvmf_host_multipath_status 00:16:08.227 ************************************ 00:16:08.227 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:08.486 * Looking for test storage... 00:16:08.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:08.486 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:08.486 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:08.486 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:08.486 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:08.486 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:08.486 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:08.486 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:08.486 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:08.486 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:08.486 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:08.486 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:08.486 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:08.487 19:54:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:08.487 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:08.487 Cannot find device "nvmf_tgt_br" 00:16:08.487 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:16:08.487 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:08.487 Cannot find device "nvmf_tgt_br2" 00:16:08.487 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:16:08.487 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:08.487 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:08.487 Cannot find device "nvmf_tgt_br" 00:16:08.487 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:16:08.487 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:08.487 Cannot find device "nvmf_tgt_br2" 00:16:08.487 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:16:08.487 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:08.487 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:08.487 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:08.487 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:08.487 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:08.487 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:08.487 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:08.487 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:08.487 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:08.487 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:08.487 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth 
peer name nvmf_tgt_br 00:16:08.746 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:08.746 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:08.746 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:08.746 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:08.746 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:08.746 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:08.746 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:08.746 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:08.746 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:08.746 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:08.746 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:08.746 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:08.746 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:08.746 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:08.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:08.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:16:08.747 00:16:08.747 --- 10.0.0.2 ping statistics --- 00:16:08.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.747 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:08.747 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:08.747 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:16:08.747 00:16:08.747 --- 10.0.0.3 ping statistics --- 00:16:08.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.747 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:08.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:08.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:16:08.747 00:16:08.747 --- 10.0.0.1 ping statistics --- 00:16:08.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.747 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:08.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=76027 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 76027 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 76027 ']' 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
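The trace above is SPDK's nvmf_veth_init (the nvmf/common.sh helper named in the log) building the virtual test network before the target starts: one end of each veth pair is moved into the nvmf_tgt_ns_spdk namespace, the host-side ends are enslaved to the nvmf_br bridge, 10.0.0.1 is assigned to the initiator side and 10.0.0.2/10.0.0.3 to the target side, and connectivity is verified with single pings. Below is a minimal standalone sketch of the same topology; it assumes root privileges and iproute2/iptables, mirrors only the commands visible in the trace, and leaves out the second target interface (nvmf_tgt_if2 / 10.0.0.3) and any cleanup handling.

  # Hedged sketch of the veth/bridge topology used by the test (not the common.sh implementation itself).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                       # bridge the host-side veth ends together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # target address should answer from the host side, as in the ping output above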
00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:08.747 19:54:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:08.747 [2024-07-24 19:54:37.404931] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:16:08.747 [2024-07-24 19:54:37.405286] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.005 [2024-07-24 19:54:37.547990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:09.265 [2024-07-24 19:54:37.703868] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.265 [2024-07-24 19:54:37.704429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:09.265 [2024-07-24 19:54:37.704955] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:09.265 [2024-07-24 19:54:37.704997] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:09.265 [2024-07-24 19:54:37.705015] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:09.265 [2024-07-24 19:54:37.705166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.265 [2024-07-24 19:54:37.705375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.265 [2024-07-24 19:54:37.765002] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:09.836 19:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:09.836 19:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:16:09.836 19:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:09.836 19:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:09.836 19:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:09.836 19:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.836 19:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76027 00:16:09.836 19:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:10.401 [2024-07-24 19:54:38.767774] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:10.401 19:54:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:10.660 Malloc0 00:16:10.660 19:54:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:10.918 19:54:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:11.176 19:54:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:11.434 [2024-07-24 19:54:39.945937] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:11.434 19:54:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:11.693 [2024-07-24 19:54:40.190078] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:11.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:11.693 19:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76083 00:16:11.693 19:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:11.693 19:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:11.693 19:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76083 /var/tmp/bdevperf.sock 00:16:11.693 19:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 76083 ']' 00:16:11.693 19:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:11.693 19:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:11.693 19:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
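By this point the multipath_status fixture has finished the target-side setup shown above: a TCP transport, a 64 MB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 exposing that bdev, listeners on 10.0.0.2:4420 and 10.0.0.2:4421, and a bdevperf process started in -z mode that now waits on /var/tmp/bdevperf.sock for its own RPC configuration (the controller attach with -x multipath follows below in the trace). A condensed sketch of the same sequence, with the rpc.py and bdevperf paths shortened and flags copied from the trace:

  # Hedged recap of the target setup performed above (flags as they appear in the log).
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0                    # 64 MB bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -r -m 2                        # -r enables ANA reporting, relied on by the later set_ana_state calls
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &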
00:16:11.693 19:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:11.693 19:54:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:12.627 19:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:12.627 19:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:16:12.627 19:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:12.885 19:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:16:13.143 Nvme0n1 00:16:13.143 19:54:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:13.400 Nvme0n1 00:16:13.400 19:54:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:13.400 19:54:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:15.930 19:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:15.930 19:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:15.930 19:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:15.930 19:54:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:16.910 19:54:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:16.910 19:54:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:16.910 19:54:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.910 19:54:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:17.168 19:54:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.168 19:54:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:17.168 19:54:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.168 19:54:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:17.426 19:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:17.426 19:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:17.426 19:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.426 19:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:17.684 19:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.684 19:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:17.684 19:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:17.684 19:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.943 19:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.943 19:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:17.943 19:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.943 19:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:18.201 19:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:18.201 19:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:18.201 19:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:18.201 19:54:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:18.459 19:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:18.459 19:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:18.459 19:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:18.717 19:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:18.975 19:54:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:20.366 19:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:20.366 19:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:20.366 19:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.366 19:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:20.366 19:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:20.366 19:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:20.366 19:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.366 19:54:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:20.625 19:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.625 19:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:20.625 19:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.625 19:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:20.884 19:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.884 19:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:20.884 19:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.884 19:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:21.143 19:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.143 19:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:21.143 19:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.143 19:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:21.402 19:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.402 19:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:21.402 19:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.402 19:54:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:21.660 19:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.660 19:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:21.660 19:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:21.918 19:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:22.175 19:54:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:23.109 19:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:23.109 19:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:23.109 19:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.109 19:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:23.367 19:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:23.367 19:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:23.367 19:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.367 19:54:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:23.625 19:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:23.625 19:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:23.625 19:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.625 19:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:23.882 19:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:23.882 19:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:16:23.882 19:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.882 19:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:24.140 19:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:24.140 19:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:24.140 19:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:24.140 19:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.397 19:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:24.397 19:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:24.397 19:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.397 19:54:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:24.655 19:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:24.655 19:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:24.655 19:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:24.923 19:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:25.181 19:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:26.114 19:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:26.114 19:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:26.114 19:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.114 19:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:26.372 19:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:26.372 19:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:26.372 19:54:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.372 19:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:26.630 19:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:26.630 19:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:26.630 19:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:26.630 19:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.889 19:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:26.889 19:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:26.889 19:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:26.889 19:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.147 19:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:27.147 19:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:27.147 19:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.147 19:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:27.406 19:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:27.406 19:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:27.406 19:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.406 19:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:27.664 19:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:27.664 19:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:16:27.664 19:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:27.921 19:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:28.179 19:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:29.113 19:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:29.113 19:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:29.113 19:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:29.113 19:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:29.372 19:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:29.372 19:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:29.372 19:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:29.372 19:54:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:29.630 19:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:29.630 19:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:29.631 19:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:29.631 19:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:29.889 19:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:29.889 19:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:29.889 19:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:29.889 19:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:30.147 19:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:30.147 19:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:30.147 19:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.147 19:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:16:30.405 19:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:30.405 19:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:30.405 19:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.405 19:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:30.663 19:54:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:30.663 19:54:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:30.663 19:54:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:30.922 19:54:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:31.179 19:54:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:32.117 19:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:32.117 19:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:32.117 19:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:32.117 19:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.374 19:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:32.374 19:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:32.374 19:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.374 19:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:32.631 19:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:32.631 19:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:32.631 19:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.631 19:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:16:33.196 19:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.196 19:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:33.196 19:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.196 19:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:33.196 19:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.196 19:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:33.196 19:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.196 19:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:33.453 19:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:33.453 19:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:33.453 19:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.454 19:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:33.711 19:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.711 19:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:33.969 19:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:33.969 19:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:34.261 19:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:34.519 19:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:35.456 19:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:35.456 19:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:35.456 19:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
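With the per-state checks done, the trace above switches the multipath policy of the assembled Nvme0n1 bdev from active_passive (the usual default) to active_active and repeats the same status matrix; in the rounds that follow, both the 4420 and 4421 paths can report current == true at the same time whenever both listeners are optimized. The policy change itself is a single RPC against the bdevperf socket, as issued in this run:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active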
00:16:35.456 19:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:35.714 19:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:35.714 19:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:35.714 19:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:35.714 19:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.972 19:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:35.972 19:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:35.972 19:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.972 19:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:36.230 19:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.230 19:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:36.230 19:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.230 19:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:36.488 19:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.488 19:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:36.488 19:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.488 19:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:36.746 19:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.746 19:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:36.746 19:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.746 19:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:37.004 19:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.005 
19:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:37.005 19:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:37.262 19:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:37.521 19:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:38.456 19:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:38.456 19:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:38.456 19:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.456 19:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:38.715 19:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:38.715 19:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:38.715 19:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:38.715 19:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.973 19:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:38.973 19:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:38.973 19:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.974 19:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:39.232 19:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.232 19:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:39.232 19:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.232 19:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:39.491 19:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.491 19:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:39.491 19:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:39.491 19:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.749 19:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.749 19:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:39.749 19:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.749 19:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:40.007 19:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.007 19:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:40.008 19:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:40.266 19:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:40.524 19:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:41.504 19:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:41.504 19:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:41.504 19:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:41.504 19:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:41.762 19:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:41.762 19:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:41.762 19:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:41.762 19:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.020 19:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.020 19:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:16:42.020 19:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:42.020 19:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.278 19:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.278 19:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:42.278 19:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.278 19:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:42.536 19:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.536 19:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:42.536 19:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.536 19:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:42.795 19:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.795 19:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:42.795 19:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.795 19:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:43.053 19:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:43.053 19:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:43.053 19:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:43.311 19:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:43.569 19:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:16:44.504 19:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:44.504 19:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:44.504 19:55:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.504 19:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:44.799 19:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.799 19:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:44.799 19:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.799 19:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:45.057 19:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:45.057 19:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:45.058 19:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.058 19:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:45.624 19:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:45.624 19:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:45.624 19:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:45.624 19:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.624 19:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:45.624 19:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:45.624 19:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:45.624 19:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.882 19:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:45.882 19:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:45.882 19:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.882 19:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:16:46.141 19:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:46.141 19:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76083 00:16:46.141 19:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 76083 ']' 00:16:46.141 19:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 76083 00:16:46.141 19:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:16:46.141 19:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:46.141 19:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76083 00:16:46.401 killing process with pid 76083 00:16:46.401 19:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:16:46.401 19:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:16:46.401 19:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76083' 00:16:46.401 19:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 76083 00:16:46.401 19:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 76083 00:16:46.401 Connection closed with partial response: 00:16:46.401 00:16:46.401 00:16:46.401 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76083 00:16:46.401 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:46.401 [2024-07-24 19:54:40.252209] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:16:46.401 [2024-07-24 19:54:40.252320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76083 ] 00:16:46.401 [2024-07-24 19:54:40.388969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.401 [2024-07-24 19:54:40.502072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.401 [2024-07-24 19:54:40.555155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:46.401 Running I/O for 90 seconds... 
00:16:46.401 [2024-07-24 19:54:56.408211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:50864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.401 [2024-07-24 19:54:56.408290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:46.401 [2024-07-24 19:54:56.408350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:50872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.401 [2024-07-24 19:54:56.408373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:46.401 [2024-07-24 19:54:56.408398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.401 [2024-07-24 19:54:56.408415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:46.401 [2024-07-24 19:54:56.408438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:50888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.401 [2024-07-24 19:54:56.408454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:46.401 [2024-07-24 19:54:56.408477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:50896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.401 [2024-07-24 19:54:56.408493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:46.401 [2024-07-24 19:54:56.408516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:50904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.401 [2024-07-24 19:54:56.408532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:46.401 [2024-07-24 19:54:56.408555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.401 [2024-07-24 19:54:56.408571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:46.401 [2024-07-24 19:54:56.408593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:50920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.401 [2024-07-24 19:54:56.408610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:46.401 [2024-07-24 19:54:56.408632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.401 [2024-07-24 19:54:56.408648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:46.401 [2024-07-24 19:54:56.408670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:50488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.401 [2024-07-24 19:54:56.408687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:16:46.401 [2024-07-24 19:54:56.408 - 19:54:56.414] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: a long run of READ commands (sqid:1, nsid:1, lba 50496-50856, SGL TRANSPORT DATA BLOCK) and WRITE commands (sqid:1, nsid:1, lba 50928-51488, SGL DATA BLOCK OFFSET, len:0x1000), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0
00:16:46.404 [2024-07-24 19:55:12.133 - 19:55:12.137] nvme_qpair.c: the same READ/WRITE pattern repeats on qid:1 (READ lba 55248-55712, WRITE lba 55608-56112), each command again completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0
INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:46.406 [2024-07-24 19:55:12.137653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:55744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:46.406 [2024-07-24 19:55:12.137669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:46.406 [2024-07-24 19:55:12.137691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:56144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.406 [2024-07-24 19:55:12.137708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:46.406 [2024-07-24 19:55:12.137731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.406 [2024-07-24 19:55:12.137762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:46.406 [2024-07-24 19:55:12.137804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:56176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.406 [2024-07-24 19:55:12.137825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:46.406 [2024-07-24 19:55:12.137848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:56192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.406 [2024-07-24 19:55:12.137865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:46.406 [2024-07-24 19:55:12.137889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:56208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.406 [2024-07-24 19:55:12.137906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:46.406 [2024-07-24 19:55:12.137927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:56224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.406 [2024-07-24 19:55:12.137943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:46.406 [2024-07-24 19:55:12.137977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:56240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.406 [2024-07-24 19:55:12.137995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:46.406 [2024-07-24 19:55:12.138017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:56256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:46.406 [2024-07-24 19:55:12.138033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:46.406 Received shutdown signal, test time was about 32.713677 seconds 00:16:46.406 00:16:46.406 Latency(us) 00:16:46.406 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.406 Job: Nvme0n1 (Core Mask 0x4, 
workload: verify, depth: 128, IO size: 4096) 00:16:46.406 Verification LBA range: start 0x0 length 0x4000 00:16:46.406 Nvme0n1 : 32.71 9000.74 35.16 0.00 0.00 14188.79 143.36 4026531.84 00:16:46.406 =================================================================================================================== 00:16:46.406 Total : 9000.74 35.16 0.00 0.00 14188.79 143.36 4026531.84 00:16:46.406 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:46.664 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:16:46.664 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:46.664 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:16:46.664 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:46.664 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:16:46.923 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:46.923 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:16:46.923 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:46.923 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:46.923 rmmod nvme_tcp 00:16:46.923 rmmod nvme_fabrics 00:16:46.923 rmmod nvme_keyring 00:16:46.923 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:46.923 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:16:46.923 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:16:46.923 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 76027 ']' 00:16:46.923 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 76027 00:16:46.923 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 76027 ']' 00:16:46.923 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 76027 00:16:46.923 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:16:46.923 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:46.923 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76027 00:16:46.923 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:46.923 killing process with pid 76027 00:16:46.923 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:46.923 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76027' 00:16:46.923 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 76027 00:16:46.923 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@974 -- # wait 76027 00:16:47.182 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:47.182 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:47.182 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:47.182 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:47.182 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:47.182 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.182 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:47.182 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.182 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:47.182 ************************************ 00:16:47.182 END TEST nvmf_host_multipath_status 00:16:47.182 ************************************ 00:16:47.182 00:16:47.182 real 0m38.824s 00:16:47.182 user 2m4.779s 00:16:47.182 sys 0m11.496s 00:16:47.182 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:47.182 19:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:47.182 19:55:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:47.182 19:55:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:47.182 19:55:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:47.182 19:55:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.182 ************************************ 00:16:47.182 START TEST nvmf_discovery_remove_ifc 00:16:47.182 ************************************ 00:16:47.182 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:47.182 * Looking for test storage... 
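Stripped of the xtrace prefixes, the multipath_status teardown traced above boils down to roughly the following sketch; the paths and the 76027 PID come straight from the log, and the real nvmftestfini/killprocess helpers in autotest_common.sh add retry and error handling that is omitted here:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1    # drop the test subsystem first
  trap - SIGINT SIGTERM EXIT                                  # clear the cleanup trap
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt   # remove the scratch file
  modprobe -v -r nvme-tcp                                     # rmmods nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 76027                                                  # killprocess 76027 in the trace; the helper also waits for exit
  ip -4 addr flush nvmf_init_if                               # drop the initiator-side test address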
00:16:47.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:47.182 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:47.182 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:47.182 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:47.182 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:47.182 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:47.182 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:47.182 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:47.182 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:47.183 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:47.443 Cannot find device "nvmf_tgt_br" 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:47.443 Cannot find device "nvmf_tgt_br2" 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:47.443 Cannot find device "nvmf_tgt_br" 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:47.443 Cannot find device "nvmf_tgt_br2" 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:47.443 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:47.443 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:47.443 19:55:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:47.444 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:47.444 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:47.444 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:47.444 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:47.444 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:47.444 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:47.444 19:55:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:47.444 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:47.444 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:47.444 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:47.444 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:47.444 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:47.444 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:47.444 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:47.444 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:47.444 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:47.444 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:47.444 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:47.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:47.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:16:47.702 00:16:47.702 --- 10.0.0.2 ping statistics --- 00:16:47.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.702 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:47.702 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:47.702 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:16:47.702 00:16:47.702 --- 10.0.0.3 ping statistics --- 00:16:47.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.702 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:47.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:47.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:16:47.702 00:16:47.702 --- 10.0.0.1 ping statistics --- 00:16:47.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.702 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=76858 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 76858 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 76858 ']' 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:47.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:47.702 19:55:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:47.702 [2024-07-24 19:55:16.244774] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
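The nvmf_veth_init sequence above (network namespace, three veth pairs, a bridge, the 10.0.0.x addresses, and the reachability pings) can be reproduced stand-alone roughly as below. Interface, namespace, and address names are taken from the trace; this is a sketch, not the canonical nvmf/common.sh implementation:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk     # target-side ends live in the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if            # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br             # bridge the initiator to both target links
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3            # initiator -> target reachability
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target -> initiator reachability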
00:16:47.702 [2024-07-24 19:55:16.244893] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.960 [2024-07-24 19:55:16.387732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.960 [2024-07-24 19:55:16.545049] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:47.960 [2024-07-24 19:55:16.545129] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:47.960 [2024-07-24 19:55:16.545153] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:47.960 [2024-07-24 19:55:16.545164] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:47.960 [2024-07-24 19:55:16.545174] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:47.960 [2024-07-24 19:55:16.545207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.960 [2024-07-24 19:55:16.601658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:48.526 19:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:48.526 19:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:16:48.526 19:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:48.526 19:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:48.526 19:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:48.784 19:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:48.784 19:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:48.784 19:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.784 19:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:48.784 [2024-07-24 19:55:17.236719] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:48.784 [2024-07-24 19:55:17.244857] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:48.784 null0 00:16:48.784 [2024-07-24 19:55:17.276801] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:48.784 19:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.784 19:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=76890 00:16:48.784 19:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:48.784 19:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 76890 /tmp/host.sock 00:16:48.784 19:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 76890 ']' 00:16:48.784 19:55:17 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:16:48.784 19:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:48.784 19:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:48.784 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:48.784 19:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:48.784 19:55:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:48.784 [2024-07-24 19:55:17.348127] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:16:48.784 [2024-07-24 19:55:17.348234] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76890 ] 00:16:49.042 [2024-07-24 19:55:17.483885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.042 [2024-07-24 19:55:17.609405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.975 19:55:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:49.975 19:55:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:16:49.975 19:55:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:49.975 19:55:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:49.975 19:55:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.975 19:55:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:49.975 19:55:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.975 19:55:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:49.976 19:55:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.976 19:55:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:49.976 [2024-07-24 19:55:18.371634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:49.976 19:55:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.976 19:55:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:49.976 19:55:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.976 19:55:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 
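rpc_cmd in the trace is a thin wrapper around scripts/rpc.py pointed at the host app's Unix socket, so the host-side setup above can be written out directly as the sketch below. The arguments are copied from the trace (including -e 1, which is passed through exactly as the test does):

  # Second SPDK app acting as the NVMe-oF host, driven over /tmp/host.sock.
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /tmp/host.sock bdev_nvme_set_options -e 1        # bdev_nvme options, set before framework init
  $rpc -s /tmp/host.sock framework_start_init              # finish --wait-for-rpc startup
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach        # attach via the discovery service, block until done

With --wait-for-attach the RPC only returns once the discovered subsystem has been attached as controller nvme0, which is what the next trace lines show.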
00:16:50.910 [2024-07-24 19:55:19.429856] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:50.910 [2024-07-24 19:55:19.429897] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:50.910 [2024-07-24 19:55:19.429917] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:50.910 [2024-07-24 19:55:19.435905] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:50.910 [2024-07-24 19:55:19.493238] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:50.910 [2024-07-24 19:55:19.493315] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:50.910 [2024-07-24 19:55:19.493345] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:50.910 [2024-07-24 19:55:19.493363] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:50.911 [2024-07-24 19:55:19.493391] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:50.911 19:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.911 19:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:50.911 19:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:50.911 19:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:50.911 19:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.911 [2024-07-24 19:55:19.498474] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2032ef0 was disconnected and freed. delete nvme_qpair. 
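The get_bdev_list/wait_for_bdev helpers being exercised above reduce to a small polling loop over bdev_get_bdevs. A simplified sketch follows; the real helper in discovery_remove_ifc.sh presumably also bounds the number of retries:

  get_bdev_list() {
      # Names of all bdevs currently known to the host app, joined into one sorted line.
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
          | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      # Poll once per second until the bdev list matches the expected value.
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }

  wait_for_bdev nvme0n1    # namespace 1 of the attached nvme0 controller shows up as nvme0n1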
00:16:50.911 19:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:50.911 19:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:50.911 19:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:50.911 19:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:50.911 19:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.911 19:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:50.911 19:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:16:50.911 19:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:50.911 19:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:50.911 19:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:50.911 19:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:50.911 19:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:50.911 19:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:50.911 19:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.911 19:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:50.911 19:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:51.169 19:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.169 19:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:51.169 19:55:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:52.104 19:55:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:52.104 19:55:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:52.104 19:55:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:52.104 19:55:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.104 19:55:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:52.104 19:55:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:52.104 19:55:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:52.104 19:55:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.104 19:55:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:52.104 19:55:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:53.038 19:55:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:53.038 19:55:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:53.038 19:55:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:53.038 19:55:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:53.038 19:55:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.038 19:55:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:53.038 19:55:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:53.296 19:55:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.296 19:55:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:53.296 19:55:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:54.248 19:55:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:54.248 19:55:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:54.248 19:55:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:54.248 19:55:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:54.248 19:55:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:54.248 19:55:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.248 19:55:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:54.248 19:55:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.248 19:55:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:54.248 19:55:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:55.183 19:55:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:55.183 19:55:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:55.183 19:55:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:55.183 19:55:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.183 19:55:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:55.183 19:55:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:55.183 19:55:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:55.183 19:55:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.441 19:55:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:55.441 19:55:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:56.375 19:55:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:56.375 19:55:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:56.375 19:55:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:56.375 19:55:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.375 19:55:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:56.375 19:55:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:56.375 19:55:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:56.375 19:55:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.375 19:55:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:56.375 19:55:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:56.375 [2024-07-24 19:55:24.921472] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:56.375 [2024-07-24 19:55:24.921548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:56.375 [2024-07-24 19:55:24.921564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.375 [2024-07-24 19:55:24.921578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:56.375 [2024-07-24 19:55:24.921588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.375 [2024-07-24 19:55:24.921598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:56.375 [2024-07-24 19:55:24.921608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.375 [2024-07-24 19:55:24.921619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:56.375 [2024-07-24 19:55:24.921628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.375 [2024-07-24 19:55:24.921638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:56.375 [2024-07-24 19:55:24.921648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:56.375 [2024-07-24 19:55:24.921658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f98ac0 is same with the state(5) to be set 00:16:56.375 [2024-07-24 19:55:24.931467] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f98ac0 (9): Bad file descriptor 00:16:56.375 [2024-07-24 19:55:24.941494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:57.310 19:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:57.310 19:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:57.310 19:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.310 19:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:57.310 19:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:57.310 19:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:57.310 19:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:57.310 [2024-07-24 19:55:25.957857] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:57.310 [2024-07-24 19:55:25.957961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f98ac0 with addr=10.0.0.2, port=4420 00:16:57.310 [2024-07-24 19:55:25.957991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f98ac0 is same with the state(5) to be set 00:16:57.310 [2024-07-24 19:55:25.958048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f98ac0 (9): Bad file descriptor 00:16:57.310 [2024-07-24 19:55:25.958902] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:57.310 [2024-07-24 19:55:25.958973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:57.310 [2024-07-24 19:55:25.958996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:57.310 [2024-07-24 19:55:25.959019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:57.310 [2024-07-24 19:55:25.959062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:57.310 [2024-07-24 19:55:25.959084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:57.310 19:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.568 19:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:57.568 19:55:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:58.503 [2024-07-24 19:55:26.959160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
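The errno-110 connect failures and reset attempts above are the direct result of the fault injected earlier at discovery_remove_ifc.sh@75/@76, where the target address was removed and its veth brought down inside the namespace. With the options passed to bdev_nvme_start_discovery, the host retries roughly once per second and declares the controller lost after about two seconds of failures, at which point nvme0n1 drops out of bdev_get_bdevs:

  # Fault injection performed by the test (inside the target namespace):
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  # --reconnect-delay-sec 1   : retry the TCP connection about once per second (the uring connect() errno 110 lines)
  # --ctrlr-loss-timeout-sec 2: give up and delete the controller after ~2 s of failed reconnects
  wait_for_bdev ''            # the test then waits for the bdev list to become empty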
00:16:58.503 [2024-07-24 19:55:26.959208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:58.503 [2024-07-24 19:55:26.959220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:58.503 [2024-07-24 19:55:26.959232] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:16:58.503 [2024-07-24 19:55:26.959256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:58.503 [2024-07-24 19:55:26.959285] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:16:58.503 [2024-07-24 19:55:26.959332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.503 [2024-07-24 19:55:26.959349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.503 [2024-07-24 19:55:26.959363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.503 [2024-07-24 19:55:26.959372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.503 [2024-07-24 19:55:26.959383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.503 [2024-07-24 19:55:26.959393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.503 [2024-07-24 19:55:26.959403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.503 [2024-07-24 19:55:26.959413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.503 [2024-07-24 19:55:26.959423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:58.503 [2024-07-24 19:55:26.959432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:58.503 [2024-07-24 19:55:26.959442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:16:58.503 [2024-07-24 19:55:26.959768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f9c860 (9): Bad file descriptor 00:16:58.503 [2024-07-24 19:55:26.960781] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:58.503 [2024-07-24 19:55:26.960803] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:16:58.503 19:55:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:58.503 19:55:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:58.503 19:55:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:58.503 19:55:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.503 19:55:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:58.503 19:55:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:58.503 19:55:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:58.503 19:55:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.503 19:55:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:58.503 19:55:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:58.503 19:55:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:58.503 19:55:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:58.503 19:55:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:58.503 19:55:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:58.503 19:55:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.503 19:55:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:58.503 19:55:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:58.503 19:55:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:58.503 19:55:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:58.503 19:55:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.503 19:55:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:58.503 19:55:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:59.910 19:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:59.910 19:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:59.910 19:55:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.910 19:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:59.910 19:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:59.910 19:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:59.910 19:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:59.910 19:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.910 19:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:59.910 19:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:00.484 [2024-07-24 19:55:28.965674] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:00.484 [2024-07-24 19:55:28.965714] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:00.484 [2024-07-24 19:55:28.965732] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:00.484 [2024-07-24 19:55:28.971716] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:17:00.484 [2024-07-24 19:55:29.028083] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:00.484 [2024-07-24 19:55:29.028138] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:00.484 [2024-07-24 19:55:29.028162] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:00.484 [2024-07-24 19:55:29.028194] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:17:00.484 [2024-07-24 19:55:29.028205] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:00.484 [2024-07-24 19:55:29.034321] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2010460 was disconnected and freed. delete nvme_qpair. 
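Conversely, the re-attach traced above follows from restoring the target interface at discovery_remove_ifc.sh@82/@83. Because the old controller was already torn down, the discovery service attaches a fresh one (nvme1), and the test waits for its namespace under the new name:

  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  wait_for_bdev nvme1n1    # the re-attached controller is nvme1, so its namespace appears as nvme1n1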
00:17:00.743 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:00.743 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:00.743 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:00.743 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.743 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:00.743 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:00.743 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:00.743 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.743 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:00.743 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:00.743 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 76890 00:17:00.743 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 76890 ']' 00:17:00.743 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 76890 00:17:00.743 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:17:00.743 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:00.743 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76890 00:17:00.743 killing process with pid 76890 00:17:00.743 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:00.743 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:00.743 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76890' 00:17:00.743 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 76890 00:17:00.743 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 76890 00:17:01.001 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:01.001 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:01.001 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:17:01.001 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:01.001 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:17:01.001 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:01.001 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:01.001 rmmod nvme_tcp 00:17:01.001 rmmod nvme_fabrics 00:17:01.001 rmmod nvme_keyring 00:17:01.001 19:55:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:01.001 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:17:01.001 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:17:01.002 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 76858 ']' 00:17:01.002 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 76858 00:17:01.002 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 76858 ']' 00:17:01.002 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 76858 00:17:01.002 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:17:01.002 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:01.002 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76858 00:17:01.002 killing process with pid 76858 00:17:01.002 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:01.002 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:01.002 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76858' 00:17:01.002 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 76858 00:17:01.002 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 76858 00:17:01.260 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:01.260 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:01.260 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:01.260 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:01.260 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:01.260 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.260 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.260 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.260 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:01.260 00:17:01.260 real 0m14.088s 00:17:01.260 user 0m24.436s 00:17:01.260 sys 0m2.417s 00:17:01.260 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:01.260 19:55:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:01.260 ************************************ 00:17:01.260 END TEST nvmf_discovery_remove_ifc 00:17:01.260 ************************************ 00:17:01.260 19:55:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:01.260 19:55:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:01.260 19:55:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:01.260 19:55:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.260 ************************************ 00:17:01.260 START TEST nvmf_identify_kernel_target 00:17:01.260 ************************************ 00:17:01.260 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:01.519 * Looking for test storage... 00:17:01.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:01.519 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:01.519 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:17:01.519 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.519 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.519 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.519 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.519 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.519 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.519 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.519 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.519 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.519 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.519 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:17:01.519 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:17:01.519 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.519 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.519 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:01.519 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.519 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:01.519 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.519 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.519 
19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.519 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.519 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.519 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.520 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:17:01.520 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.520 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:17:01.520 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:01.520 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:01.520 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.520 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:17:01.520 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.520 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:01.520 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:01.520 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:01.520 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:17:01.520 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:01.520 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.520 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:01.520 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:01.520 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:01.520 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.520 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.520 19:55:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:01.520 Cannot find device "nvmf_tgt_br" 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:01.520 Cannot find device "nvmf_tgt_br2" 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:01.520 Cannot find device "nvmf_tgt_br" 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:01.520 Cannot find device "nvmf_tgt_br2" 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:01.520 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:01.520 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:01.520 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:01.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:01.780 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:17:01.780 00:17:01.780 --- 10.0.0.2 ping statistics --- 00:17:01.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.780 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:01.780 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:01.780 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:17:01.780 00:17:01.780 --- 10.0.0.3 ping statistics --- 00:17:01.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.780 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:01.780 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
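These ping checks close out nvmf_veth_init, which the trace above runs to build the virtual topology for this test: a dedicated network namespace for the target, veth pairs whose bridge-side peers are enslaved to nvmf_br, addresses from 10.0.0.0/24, and an iptables rule admitting the NVMe/TCP port. Condensed from the commands visible in the trace (a paraphrase, not the verbatim nvmf/common.sh; the second target interface nvmf_tgt_if2 at 10.0.0.3 is created the same way):

    ip netns add nvmf_tgt_ns_spdk                               # target side lives in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge                             # bridge joins the host-side peer ends
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow confirm both directions: the host reaches 10.0.0.2 and 10.0.0.3 inside the namespace, and the namespace reaches the host's 10.0.0.1.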
00:17:01.780 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:17:01.780 00:17:01.780 --- 10.0.0.1 ping statistics --- 00:17:01.780 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.780 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:17:01.780 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:17:01.781 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:17:01.781 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:01.781 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:02.039 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:02.298 Waiting for block devices as requested 00:17:02.298 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:02.298 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:02.298 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:02.298 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:02.298 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:17:02.298 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:02.298 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:02.298 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:02.298 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:17:02.298 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:17:02.298 19:55:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:02.557 No valid GPT data, bailing 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:17:02.557 19:55:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:02.557 No valid GPT data, bailing 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:02.557 No valid GPT data, bailing 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:17:02.557 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:02.816 No valid GPT data, bailing 00:17:02.816 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:02.816 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:02.816 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:02.816 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:17:02.816 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:17:02.816 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:02.816 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:02.816 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:02.816 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:17:02.816 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:17:02.816 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:17:02.816 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:17:02.816 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:17:02.816 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:17:02.816 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:17:02.816 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:17:02.816 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:02.816 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid=69cdc0e8-4c23-4318-834b-1d87efff05de -a 10.0.0.1 -t tcp -s 4420 00:17:02.816 00:17:02.816 Discovery Log Number of Records 2, Generation counter 2 00:17:02.816 =====Discovery Log Entry 0====== 00:17:02.816 trtype: tcp 00:17:02.816 adrfam: ipv4 00:17:02.816 subtype: current discovery subsystem 00:17:02.816 treq: not specified, sq flow control disable supported 00:17:02.816 portid: 1 00:17:02.816 trsvcid: 4420 00:17:02.816 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:02.816 traddr: 10.0.0.1 00:17:02.816 eflags: none 00:17:02.816 sectype: none 00:17:02.816 =====Discovery Log Entry 1====== 00:17:02.816 trtype: tcp 00:17:02.816 adrfam: ipv4 00:17:02.816 subtype: nvme subsystem 00:17:02.816 treq: not 
specified, sq flow control disable supported 00:17:02.816 portid: 1 00:17:02.816 trsvcid: 4420 00:17:02.816 subnqn: nqn.2016-06.io.spdk:testnqn 00:17:02.816 traddr: 10.0.0.1 00:17:02.816 eflags: none 00:17:02.816 sectype: none 00:17:02.816 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:17:02.816 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:17:02.816 ===================================================== 00:17:02.816 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:02.816 ===================================================== 00:17:02.816 Controller Capabilities/Features 00:17:02.816 ================================ 00:17:02.816 Vendor ID: 0000 00:17:02.816 Subsystem Vendor ID: 0000 00:17:02.816 Serial Number: 326db51cecbb97100ec6 00:17:02.816 Model Number: Linux 00:17:02.816 Firmware Version: 6.7.0-68 00:17:02.816 Recommended Arb Burst: 0 00:17:02.816 IEEE OUI Identifier: 00 00 00 00:17:02.816 Multi-path I/O 00:17:02.816 May have multiple subsystem ports: No 00:17:02.816 May have multiple controllers: No 00:17:02.816 Associated with SR-IOV VF: No 00:17:02.816 Max Data Transfer Size: Unlimited 00:17:02.816 Max Number of Namespaces: 0 00:17:02.816 Max Number of I/O Queues: 1024 00:17:02.816 NVMe Specification Version (VS): 1.3 00:17:02.816 NVMe Specification Version (Identify): 1.3 00:17:02.816 Maximum Queue Entries: 1024 00:17:02.816 Contiguous Queues Required: No 00:17:02.816 Arbitration Mechanisms Supported 00:17:02.816 Weighted Round Robin: Not Supported 00:17:02.816 Vendor Specific: Not Supported 00:17:02.816 Reset Timeout: 7500 ms 00:17:02.816 Doorbell Stride: 4 bytes 00:17:02.816 NVM Subsystem Reset: Not Supported 00:17:02.816 Command Sets Supported 00:17:02.816 NVM Command Set: Supported 00:17:02.816 Boot Partition: Not Supported 00:17:02.816 Memory Page Size Minimum: 4096 bytes 00:17:02.816 Memory Page Size Maximum: 4096 bytes 00:17:02.816 Persistent Memory Region: Not Supported 00:17:02.816 Optional Asynchronous Events Supported 00:17:02.816 Namespace Attribute Notices: Not Supported 00:17:02.816 Firmware Activation Notices: Not Supported 00:17:02.816 ANA Change Notices: Not Supported 00:17:02.816 PLE Aggregate Log Change Notices: Not Supported 00:17:02.816 LBA Status Info Alert Notices: Not Supported 00:17:02.816 EGE Aggregate Log Change Notices: Not Supported 00:17:02.816 Normal NVM Subsystem Shutdown event: Not Supported 00:17:02.816 Zone Descriptor Change Notices: Not Supported 00:17:02.816 Discovery Log Change Notices: Supported 00:17:02.816 Controller Attributes 00:17:02.816 128-bit Host Identifier: Not Supported 00:17:02.816 Non-Operational Permissive Mode: Not Supported 00:17:02.816 NVM Sets: Not Supported 00:17:02.816 Read Recovery Levels: Not Supported 00:17:02.816 Endurance Groups: Not Supported 00:17:02.816 Predictable Latency Mode: Not Supported 00:17:02.816 Traffic Based Keep ALive: Not Supported 00:17:02.816 Namespace Granularity: Not Supported 00:17:02.816 SQ Associations: Not Supported 00:17:02.816 UUID List: Not Supported 00:17:02.816 Multi-Domain Subsystem: Not Supported 00:17:02.816 Fixed Capacity Management: Not Supported 00:17:02.816 Variable Capacity Management: Not Supported 00:17:02.816 Delete Endurance Group: Not Supported 00:17:02.816 Delete NVM Set: Not Supported 00:17:02.816 Extended LBA Formats Supported: Not Supported 00:17:02.816 Flexible Data 
Placement Supported: Not Supported 00:17:02.816 00:17:02.816 Controller Memory Buffer Support 00:17:02.816 ================================ 00:17:02.816 Supported: No 00:17:02.816 00:17:02.816 Persistent Memory Region Support 00:17:02.816 ================================ 00:17:02.816 Supported: No 00:17:02.816 00:17:02.816 Admin Command Set Attributes 00:17:02.816 ============================ 00:17:02.817 Security Send/Receive: Not Supported 00:17:02.817 Format NVM: Not Supported 00:17:02.817 Firmware Activate/Download: Not Supported 00:17:02.817 Namespace Management: Not Supported 00:17:02.817 Device Self-Test: Not Supported 00:17:02.817 Directives: Not Supported 00:17:02.817 NVMe-MI: Not Supported 00:17:02.817 Virtualization Management: Not Supported 00:17:02.817 Doorbell Buffer Config: Not Supported 00:17:02.817 Get LBA Status Capability: Not Supported 00:17:02.817 Command & Feature Lockdown Capability: Not Supported 00:17:02.817 Abort Command Limit: 1 00:17:02.817 Async Event Request Limit: 1 00:17:02.817 Number of Firmware Slots: N/A 00:17:02.817 Firmware Slot 1 Read-Only: N/A 00:17:02.817 Firmware Activation Without Reset: N/A 00:17:02.817 Multiple Update Detection Support: N/A 00:17:02.817 Firmware Update Granularity: No Information Provided 00:17:02.817 Per-Namespace SMART Log: No 00:17:02.817 Asymmetric Namespace Access Log Page: Not Supported 00:17:02.817 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:02.817 Command Effects Log Page: Not Supported 00:17:02.817 Get Log Page Extended Data: Supported 00:17:02.817 Telemetry Log Pages: Not Supported 00:17:02.817 Persistent Event Log Pages: Not Supported 00:17:02.817 Supported Log Pages Log Page: May Support 00:17:02.817 Commands Supported & Effects Log Page: Not Supported 00:17:02.817 Feature Identifiers & Effects Log Page:May Support 00:17:02.817 NVMe-MI Commands & Effects Log Page: May Support 00:17:02.817 Data Area 4 for Telemetry Log: Not Supported 00:17:02.817 Error Log Page Entries Supported: 1 00:17:02.817 Keep Alive: Not Supported 00:17:02.817 00:17:02.817 NVM Command Set Attributes 00:17:02.817 ========================== 00:17:02.817 Submission Queue Entry Size 00:17:02.817 Max: 1 00:17:02.817 Min: 1 00:17:02.817 Completion Queue Entry Size 00:17:02.817 Max: 1 00:17:02.817 Min: 1 00:17:02.817 Number of Namespaces: 0 00:17:02.817 Compare Command: Not Supported 00:17:02.817 Write Uncorrectable Command: Not Supported 00:17:02.817 Dataset Management Command: Not Supported 00:17:02.817 Write Zeroes Command: Not Supported 00:17:02.817 Set Features Save Field: Not Supported 00:17:02.817 Reservations: Not Supported 00:17:02.817 Timestamp: Not Supported 00:17:02.817 Copy: Not Supported 00:17:02.817 Volatile Write Cache: Not Present 00:17:02.817 Atomic Write Unit (Normal): 1 00:17:02.817 Atomic Write Unit (PFail): 1 00:17:02.817 Atomic Compare & Write Unit: 1 00:17:02.817 Fused Compare & Write: Not Supported 00:17:02.817 Scatter-Gather List 00:17:02.817 SGL Command Set: Supported 00:17:02.817 SGL Keyed: Not Supported 00:17:02.817 SGL Bit Bucket Descriptor: Not Supported 00:17:02.817 SGL Metadata Pointer: Not Supported 00:17:02.817 Oversized SGL: Not Supported 00:17:02.817 SGL Metadata Address: Not Supported 00:17:02.817 SGL Offset: Supported 00:17:02.817 Transport SGL Data Block: Not Supported 00:17:02.817 Replay Protected Memory Block: Not Supported 00:17:02.817 00:17:02.817 Firmware Slot Information 00:17:02.817 ========================= 00:17:02.817 Active slot: 0 00:17:02.817 00:17:02.817 00:17:02.817 Error Log 
00:17:02.817 ========= 00:17:02.817 00:17:02.817 Active Namespaces 00:17:02.817 ================= 00:17:02.817 Discovery Log Page 00:17:02.817 ================== 00:17:02.817 Generation Counter: 2 00:17:02.817 Number of Records: 2 00:17:02.817 Record Format: 0 00:17:02.817 00:17:02.817 Discovery Log Entry 0 00:17:02.817 ---------------------- 00:17:02.817 Transport Type: 3 (TCP) 00:17:02.817 Address Family: 1 (IPv4) 00:17:02.817 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:02.817 Entry Flags: 00:17:02.817 Duplicate Returned Information: 0 00:17:02.817 Explicit Persistent Connection Support for Discovery: 0 00:17:02.817 Transport Requirements: 00:17:02.817 Secure Channel: Not Specified 00:17:02.817 Port ID: 1 (0x0001) 00:17:02.817 Controller ID: 65535 (0xffff) 00:17:02.817 Admin Max SQ Size: 32 00:17:02.817 Transport Service Identifier: 4420 00:17:02.817 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:02.817 Transport Address: 10.0.0.1 00:17:02.817 Discovery Log Entry 1 00:17:02.817 ---------------------- 00:17:02.817 Transport Type: 3 (TCP) 00:17:02.817 Address Family: 1 (IPv4) 00:17:02.817 Subsystem Type: 2 (NVM Subsystem) 00:17:02.817 Entry Flags: 00:17:02.817 Duplicate Returned Information: 0 00:17:02.817 Explicit Persistent Connection Support for Discovery: 0 00:17:02.817 Transport Requirements: 00:17:02.817 Secure Channel: Not Specified 00:17:02.817 Port ID: 1 (0x0001) 00:17:02.817 Controller ID: 65535 (0xffff) 00:17:02.817 Admin Max SQ Size: 32 00:17:02.817 Transport Service Identifier: 4420 00:17:02.817 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:17:02.817 Transport Address: 10.0.0.1 00:17:02.817 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:17:03.082 get_feature(0x01) failed 00:17:03.082 get_feature(0x02) failed 00:17:03.082 get_feature(0x04) failed 00:17:03.082 ===================================================== 00:17:03.082 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:17:03.082 ===================================================== 00:17:03.082 Controller Capabilities/Features 00:17:03.082 ================================ 00:17:03.082 Vendor ID: 0000 00:17:03.082 Subsystem Vendor ID: 0000 00:17:03.082 Serial Number: 70553d245082960c5442 00:17:03.082 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:17:03.082 Firmware Version: 6.7.0-68 00:17:03.082 Recommended Arb Burst: 6 00:17:03.082 IEEE OUI Identifier: 00 00 00 00:17:03.082 Multi-path I/O 00:17:03.082 May have multiple subsystem ports: Yes 00:17:03.082 May have multiple controllers: Yes 00:17:03.082 Associated with SR-IOV VF: No 00:17:03.082 Max Data Transfer Size: Unlimited 00:17:03.082 Max Number of Namespaces: 1024 00:17:03.082 Max Number of I/O Queues: 128 00:17:03.082 NVMe Specification Version (VS): 1.3 00:17:03.082 NVMe Specification Version (Identify): 1.3 00:17:03.082 Maximum Queue Entries: 1024 00:17:03.082 Contiguous Queues Required: No 00:17:03.082 Arbitration Mechanisms Supported 00:17:03.083 Weighted Round Robin: Not Supported 00:17:03.083 Vendor Specific: Not Supported 00:17:03.083 Reset Timeout: 7500 ms 00:17:03.083 Doorbell Stride: 4 bytes 00:17:03.083 NVM Subsystem Reset: Not Supported 00:17:03.083 Command Sets Supported 00:17:03.083 NVM Command Set: Supported 00:17:03.083 Boot Partition: Not Supported 00:17:03.083 Memory 
Page Size Minimum: 4096 bytes 00:17:03.083 Memory Page Size Maximum: 4096 bytes 00:17:03.083 Persistent Memory Region: Not Supported 00:17:03.083 Optional Asynchronous Events Supported 00:17:03.083 Namespace Attribute Notices: Supported 00:17:03.083 Firmware Activation Notices: Not Supported 00:17:03.083 ANA Change Notices: Supported 00:17:03.083 PLE Aggregate Log Change Notices: Not Supported 00:17:03.083 LBA Status Info Alert Notices: Not Supported 00:17:03.083 EGE Aggregate Log Change Notices: Not Supported 00:17:03.083 Normal NVM Subsystem Shutdown event: Not Supported 00:17:03.083 Zone Descriptor Change Notices: Not Supported 00:17:03.083 Discovery Log Change Notices: Not Supported 00:17:03.083 Controller Attributes 00:17:03.083 128-bit Host Identifier: Supported 00:17:03.083 Non-Operational Permissive Mode: Not Supported 00:17:03.083 NVM Sets: Not Supported 00:17:03.083 Read Recovery Levels: Not Supported 00:17:03.083 Endurance Groups: Not Supported 00:17:03.083 Predictable Latency Mode: Not Supported 00:17:03.083 Traffic Based Keep ALive: Supported 00:17:03.083 Namespace Granularity: Not Supported 00:17:03.083 SQ Associations: Not Supported 00:17:03.083 UUID List: Not Supported 00:17:03.083 Multi-Domain Subsystem: Not Supported 00:17:03.083 Fixed Capacity Management: Not Supported 00:17:03.083 Variable Capacity Management: Not Supported 00:17:03.083 Delete Endurance Group: Not Supported 00:17:03.083 Delete NVM Set: Not Supported 00:17:03.083 Extended LBA Formats Supported: Not Supported 00:17:03.083 Flexible Data Placement Supported: Not Supported 00:17:03.083 00:17:03.083 Controller Memory Buffer Support 00:17:03.083 ================================ 00:17:03.083 Supported: No 00:17:03.083 00:17:03.083 Persistent Memory Region Support 00:17:03.083 ================================ 00:17:03.083 Supported: No 00:17:03.083 00:17:03.083 Admin Command Set Attributes 00:17:03.083 ============================ 00:17:03.083 Security Send/Receive: Not Supported 00:17:03.083 Format NVM: Not Supported 00:17:03.083 Firmware Activate/Download: Not Supported 00:17:03.083 Namespace Management: Not Supported 00:17:03.083 Device Self-Test: Not Supported 00:17:03.083 Directives: Not Supported 00:17:03.083 NVMe-MI: Not Supported 00:17:03.083 Virtualization Management: Not Supported 00:17:03.083 Doorbell Buffer Config: Not Supported 00:17:03.083 Get LBA Status Capability: Not Supported 00:17:03.083 Command & Feature Lockdown Capability: Not Supported 00:17:03.083 Abort Command Limit: 4 00:17:03.083 Async Event Request Limit: 4 00:17:03.083 Number of Firmware Slots: N/A 00:17:03.083 Firmware Slot 1 Read-Only: N/A 00:17:03.083 Firmware Activation Without Reset: N/A 00:17:03.083 Multiple Update Detection Support: N/A 00:17:03.083 Firmware Update Granularity: No Information Provided 00:17:03.083 Per-Namespace SMART Log: Yes 00:17:03.083 Asymmetric Namespace Access Log Page: Supported 00:17:03.083 ANA Transition Time : 10 sec 00:17:03.083 00:17:03.083 Asymmetric Namespace Access Capabilities 00:17:03.083 ANA Optimized State : Supported 00:17:03.083 ANA Non-Optimized State : Supported 00:17:03.083 ANA Inaccessible State : Supported 00:17:03.083 ANA Persistent Loss State : Supported 00:17:03.083 ANA Change State : Supported 00:17:03.083 ANAGRPID is not changed : No 00:17:03.083 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:17:03.083 00:17:03.083 ANA Group Identifier Maximum : 128 00:17:03.083 Number of ANA Group Identifiers : 128 00:17:03.083 Max Number of Allowed Namespaces : 1024 00:17:03.083 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:17:03.083 Command Effects Log Page: Supported 00:17:03.083 Get Log Page Extended Data: Supported 00:17:03.083 Telemetry Log Pages: Not Supported 00:17:03.083 Persistent Event Log Pages: Not Supported 00:17:03.083 Supported Log Pages Log Page: May Support 00:17:03.083 Commands Supported & Effects Log Page: Not Supported 00:17:03.083 Feature Identifiers & Effects Log Page:May Support 00:17:03.083 NVMe-MI Commands & Effects Log Page: May Support 00:17:03.083 Data Area 4 for Telemetry Log: Not Supported 00:17:03.083 Error Log Page Entries Supported: 128 00:17:03.083 Keep Alive: Supported 00:17:03.083 Keep Alive Granularity: 1000 ms 00:17:03.083 00:17:03.083 NVM Command Set Attributes 00:17:03.083 ========================== 00:17:03.083 Submission Queue Entry Size 00:17:03.083 Max: 64 00:17:03.083 Min: 64 00:17:03.083 Completion Queue Entry Size 00:17:03.083 Max: 16 00:17:03.083 Min: 16 00:17:03.083 Number of Namespaces: 1024 00:17:03.083 Compare Command: Not Supported 00:17:03.083 Write Uncorrectable Command: Not Supported 00:17:03.083 Dataset Management Command: Supported 00:17:03.083 Write Zeroes Command: Supported 00:17:03.083 Set Features Save Field: Not Supported 00:17:03.083 Reservations: Not Supported 00:17:03.083 Timestamp: Not Supported 00:17:03.083 Copy: Not Supported 00:17:03.083 Volatile Write Cache: Present 00:17:03.083 Atomic Write Unit (Normal): 1 00:17:03.083 Atomic Write Unit (PFail): 1 00:17:03.083 Atomic Compare & Write Unit: 1 00:17:03.083 Fused Compare & Write: Not Supported 00:17:03.083 Scatter-Gather List 00:17:03.083 SGL Command Set: Supported 00:17:03.083 SGL Keyed: Not Supported 00:17:03.083 SGL Bit Bucket Descriptor: Not Supported 00:17:03.083 SGL Metadata Pointer: Not Supported 00:17:03.083 Oversized SGL: Not Supported 00:17:03.083 SGL Metadata Address: Not Supported 00:17:03.083 SGL Offset: Supported 00:17:03.083 Transport SGL Data Block: Not Supported 00:17:03.083 Replay Protected Memory Block: Not Supported 00:17:03.083 00:17:03.083 Firmware Slot Information 00:17:03.083 ========================= 00:17:03.083 Active slot: 0 00:17:03.083 00:17:03.083 Asymmetric Namespace Access 00:17:03.083 =========================== 00:17:03.083 Change Count : 0 00:17:03.083 Number of ANA Group Descriptors : 1 00:17:03.083 ANA Group Descriptor : 0 00:17:03.083 ANA Group ID : 1 00:17:03.083 Number of NSID Values : 1 00:17:03.083 Change Count : 0 00:17:03.083 ANA State : 1 00:17:03.083 Namespace Identifier : 1 00:17:03.083 00:17:03.083 Commands Supported and Effects 00:17:03.083 ============================== 00:17:03.083 Admin Commands 00:17:03.083 -------------- 00:17:03.083 Get Log Page (02h): Supported 00:17:03.083 Identify (06h): Supported 00:17:03.083 Abort (08h): Supported 00:17:03.083 Set Features (09h): Supported 00:17:03.083 Get Features (0Ah): Supported 00:17:03.083 Asynchronous Event Request (0Ch): Supported 00:17:03.083 Keep Alive (18h): Supported 00:17:03.083 I/O Commands 00:17:03.083 ------------ 00:17:03.083 Flush (00h): Supported 00:17:03.083 Write (01h): Supported LBA-Change 00:17:03.083 Read (02h): Supported 00:17:03.083 Write Zeroes (08h): Supported LBA-Change 00:17:03.083 Dataset Management (09h): Supported 00:17:03.083 00:17:03.083 Error Log 00:17:03.083 ========= 00:17:03.083 Entry: 0 00:17:03.083 Error Count: 0x3 00:17:03.083 Submission Queue Id: 0x0 00:17:03.083 Command Id: 0x5 00:17:03.083 Phase Bit: 0 00:17:03.083 Status Code: 0x2 00:17:03.083 Status Code Type: 0x0 00:17:03.083 Do Not Retry: 1 00:17:03.083 Error 
Location: 0x28 00:17:03.083 LBA: 0x0 00:17:03.083 Namespace: 0x0 00:17:03.083 Vendor Log Page: 0x0 00:17:03.083 ----------- 00:17:03.083 Entry: 1 00:17:03.083 Error Count: 0x2 00:17:03.083 Submission Queue Id: 0x0 00:17:03.083 Command Id: 0x5 00:17:03.083 Phase Bit: 0 00:17:03.083 Status Code: 0x2 00:17:03.083 Status Code Type: 0x0 00:17:03.083 Do Not Retry: 1 00:17:03.083 Error Location: 0x28 00:17:03.083 LBA: 0x0 00:17:03.083 Namespace: 0x0 00:17:03.084 Vendor Log Page: 0x0 00:17:03.084 ----------- 00:17:03.084 Entry: 2 00:17:03.084 Error Count: 0x1 00:17:03.084 Submission Queue Id: 0x0 00:17:03.084 Command Id: 0x4 00:17:03.084 Phase Bit: 0 00:17:03.084 Status Code: 0x2 00:17:03.084 Status Code Type: 0x0 00:17:03.084 Do Not Retry: 1 00:17:03.084 Error Location: 0x28 00:17:03.084 LBA: 0x0 00:17:03.084 Namespace: 0x0 00:17:03.084 Vendor Log Page: 0x0 00:17:03.084 00:17:03.084 Number of Queues 00:17:03.084 ================ 00:17:03.084 Number of I/O Submission Queues: 128 00:17:03.084 Number of I/O Completion Queues: 128 00:17:03.084 00:17:03.084 ZNS Specific Controller Data 00:17:03.084 ============================ 00:17:03.084 Zone Append Size Limit: 0 00:17:03.084 00:17:03.084 00:17:03.084 Active Namespaces 00:17:03.084 ================= 00:17:03.084 get_feature(0x05) failed 00:17:03.084 Namespace ID:1 00:17:03.084 Command Set Identifier: NVM (00h) 00:17:03.084 Deallocate: Supported 00:17:03.084 Deallocated/Unwritten Error: Not Supported 00:17:03.084 Deallocated Read Value: Unknown 00:17:03.084 Deallocate in Write Zeroes: Not Supported 00:17:03.084 Deallocated Guard Field: 0xFFFF 00:17:03.084 Flush: Supported 00:17:03.084 Reservation: Not Supported 00:17:03.084 Namespace Sharing Capabilities: Multiple Controllers 00:17:03.084 Size (in LBAs): 1310720 (5GiB) 00:17:03.084 Capacity (in LBAs): 1310720 (5GiB) 00:17:03.084 Utilization (in LBAs): 1310720 (5GiB) 00:17:03.084 UUID: 2e1dffc8-88d4-43d7-9913-c7ca4b4e1169 00:17:03.084 Thin Provisioning: Not Supported 00:17:03.084 Per-NS Atomic Units: Yes 00:17:03.084 Atomic Boundary Size (Normal): 0 00:17:03.084 Atomic Boundary Size (PFail): 0 00:17:03.084 Atomic Boundary Offset: 0 00:17:03.084 NGUID/EUI64 Never Reused: No 00:17:03.084 ANA group ID: 1 00:17:03.084 Namespace Write Protected: No 00:17:03.084 Number of LBA Formats: 1 00:17:03.084 Current LBA Format: LBA Format #00 00:17:03.084 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:17:03.084 00:17:03.084 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:17:03.084 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:03.084 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:17:03.084 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:03.084 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:17:03.084 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:03.084 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:03.084 rmmod nvme_tcp 00:17:03.084 rmmod nvme_fabrics 00:17:03.360 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:03.360 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:17:03.360 19:55:31 
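Both identify dumps above describe a Linux kernel nvmet target that configure_kernel_target set up earlier in the trace through configfs. xtrace does not print the redirection targets of those echo commands, so the attribute paths below are filled in from the standard nvmet configfs layout; treat this as an annotated reconstruction under that assumption, not part of the log:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=$nvmet/ports/1
    mkdir $subsys
    mkdir $subsys/namespaces/1
    mkdir $port
    echo SPDK-nqn.2016-06.io.spdk:testnqn > $subsys/attr_model   # reported above as the Model Number
    echo 1 > $subsys/attr_allow_any_host
    echo /dev/nvme1n1 > $subsys/namespaces/1/device_path         # the unclaimed local drive found by the GPT scan
    echo 1 > $subsys/namespaces/1/enable
    echo 10.0.0.1 > $port/addr_traddr                            # listen on the initiator-side address
    echo tcp > $port/addr_trtype
    echo 4420 > $port/addr_trsvcid
    echo ipv4 > $port/addr_adrfam
    ln -s $subsys $port/subsystems/                              # publish the subsystem on the port

The output above is consistent with that setup: the discovery page lists nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420, and the identify of that subsystem reports Model Number SPDK-nqn.2016-06.io.spdk:testnqn with a single namespace of 1310720 4 KiB LBAs (5 GiB) backed by the local drive.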
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:17:03.360 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:03.360 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:03.360 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:03.360 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:03.360 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:03.360 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:03.360 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.360 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:03.360 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.360 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:03.360 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:17:03.360 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:17:03.360 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:17:03.360 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:03.360 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:03.360 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:03.360 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:03.360 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:03.360 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:03.360 19:55:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:03.927 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:04.185 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:04.185 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:04.185 ************************************ 00:17:04.185 END TEST nvmf_identify_kernel_target 00:17:04.185 ************************************ 00:17:04.185 00:17:04.185 real 0m2.836s 00:17:04.185 user 0m0.968s 00:17:04.185 sys 0m1.352s 00:17:04.185 19:55:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:04.185 19:55:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.185 19:55:32 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:04.185 19:55:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:04.185 19:55:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:04.185 19:55:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.185 ************************************ 00:17:04.185 START TEST nvmf_auth_host 00:17:04.185 ************************************ 00:17:04.185 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:04.444 * Looking for test storage... 00:17:04.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:04.444 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:04.444 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:17:04.444 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:04.444 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:04.444 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:04.444 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:04.445 19:55:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:04.445 Cannot find device "nvmf_tgt_br" 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:04.445 Cannot find device "nvmf_tgt_br2" 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:04.445 Cannot find device "nvmf_tgt_br" 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:04.445 Cannot find device "nvmf_tgt_br2" 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:17:04.445 19:55:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:04.445 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:04.445 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:04.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:04.445 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:17:04.445 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:04.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:04.445 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:17:04.445 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:04.445 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:04.445 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:04.445 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:04.445 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:04.445 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:04.445 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:04.445 19:55:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:04.445 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:04.704 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:04.704 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:04.704 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:04.704 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:04.704 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:04.704 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:04.704 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:04.704 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:04.704 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:04.704 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:04.704 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:04.704 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:04.704 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:04.704 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:04.704 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:04.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:04.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:17:04.704 00:17:04.704 --- 10.0.0.2 ping statistics --- 00:17:04.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.704 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:17:04.704 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:04.704 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:04.704 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:17:04.704 00:17:04.704 --- 10.0.0.3 ping statistics --- 00:17:04.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.704 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:17:04.704 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:04.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:04.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:04.704 00:17:04.704 --- 10.0.0.1 ping statistics --- 00:17:04.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:04.704 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:04.704 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:04.704 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:17:04.704 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:04.705 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:04.705 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:04.705 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:04.705 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:04.705 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:04.705 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:04.705 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:17:04.705 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:04.705 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:04.705 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.705 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=77776 00:17:04.705 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:17:04.705 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 77776 00:17:04.705 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 77776 ']' 00:17:04.705 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.705 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:04.705 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
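The trace above is the suite's veth/bridge setup followed by nvmfappstart: the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace, the target interfaces sit inside the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), the host-side veth peers are enslaved to the nvmf_br bridge, reachability is confirmed with one ping per address, and nvmf_tgt is then launched inside that namespace with -L nvme_auth while the script waits for its RPC socket. A minimal standalone sketch of the same topology and launch is below; it recreates only one target interface, and the nvmf_tgt binary path and the socket-polling loop are illustrative assumptions rather than the suite's exact waitforlisten helper.

    # Sketch only: one-target-interface version of the veth/bridge topology
    # traced above, plus starting nvmf_tgt inside the target namespace.
    NS=nvmf_tgt_ns_spdk
    SPDK_BIN=./build/bin/nvmf_tgt        # path is an assumption, adjust to your tree

    ip netns add "$NS"
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link set nvmf_tgt_if netns "$NS"

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set lo up

    # Bridge the host-side peers so initiator and target namespaces can talk.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                   # same sanity check as in the log

    # Start the target app in the namespace, then poll for its RPC socket.
    ip netns exec "$NS" "$SPDK_BIN" -i 0 -e 0xFFFF -L nvme_auth &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done

Keeping the target in its own network namespace is what lets a single VM act as both NVMe-oF host and target over real TCP interfaces instead of loopback.
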
00:17:04.705 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:04.705 19:55:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.079 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:06.079 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:17:06.079 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:06.079 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:06.079 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.079 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.079 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:06.079 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:06.079 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:06.079 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=006591ecddf3423b6e34ee8519eaf5d9 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.7R1 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 006591ecddf3423b6e34ee8519eaf5d9 0 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 006591ecddf3423b6e34ee8519eaf5d9 0 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=006591ecddf3423b6e34ee8519eaf5d9 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.7R1 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.7R1 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.7R1 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:06.080 19:55:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9e882e2d987efaa4ef541c5a597a2bbba048411a3fe2fd607df4278678a10be7 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.O7r 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9e882e2d987efaa4ef541c5a597a2bbba048411a3fe2fd607df4278678a10be7 3 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9e882e2d987efaa4ef541c5a597a2bbba048411a3fe2fd607df4278678a10be7 3 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9e882e2d987efaa4ef541c5a597a2bbba048411a3fe2fd607df4278678a10be7 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.O7r 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.O7r 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.O7r 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=464def375ec5cb0a7676a26840887334445eff461f4d30a3 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.D78 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 464def375ec5cb0a7676a26840887334445eff461f4d30a3 0 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 464def375ec5cb0a7676a26840887334445eff461f4d30a3 0 
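The gen_dhchap_key calls traced here boil down to: read N random bytes from /dev/urandom as a hex string (xxd -p -c0), wrap that string in a DHHC-1 secret, and store it mode 0600 in a /tmp/spdk.key-* file, with the keys[] entries used as host keys and the ckeys[] entries as the matching controller keys. A rough equivalent of the wrapping step is sketched below; the helper name gen_key is made up for illustration, and the trailer layout (base64 of the ASCII hex string plus a 4-byte CRC-32) is inferred from the keys printed later in this log, not quoted from a spec, so treat it as an assumption.

    # Sketch only: approximate gen_dhchap_key/format_dhchap_key as traced above.
    # Digest ids follow the log's mapping: null=0, sha256=1, sha384=2, sha512=3.
    gen_key () {
        local digest_id=$1 nbytes=$2          # e.g. 0 and 24 -> a 48-char null-digest key
        local hex file
        hex=$(xxd -p -c0 -l "$nbytes" /dev/urandom)
        file=$(mktemp -t spdk.key-XXX)
        python3 - "$digest_id" "$hex" > "$file" <<'PY'
    import base64, struct, sys, zlib
    digest, key = int(sys.argv[1]), sys.argv[2].encode()
    # Assumed layout: base64(ASCII hex key + little-endian CRC-32 of that key)
    blob = base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode()
    print(f"DHHC-1:{digest:02x}:{blob}:")
    PY
        chmod 0600 "$file"
        echo "$file"
    }

    # Usage mirroring the trace: a 48-character host key with no hash (digest 0).
    key_file=$(gen_key 0 24)
    cat "$key_file"        # e.g. DHHC-1:00:<base64 blob>:

The resulting files are what the script registers a few lines further down with rpc_cmd keyring_file_add_key key0/ckey0 and so on, which is how the later DH-HMAC-CHAP steps refer to them by keyring name.
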
00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=464def375ec5cb0a7676a26840887334445eff461f4d30a3 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.D78 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.D78 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.D78 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=984b722943ee0840d8fc33680e1582d0bbab7e59fe0abebd 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.WOE 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 984b722943ee0840d8fc33680e1582d0bbab7e59fe0abebd 2 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 984b722943ee0840d8fc33680e1582d0bbab7e59fe0abebd 2 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=984b722943ee0840d8fc33680e1582d0bbab7e59fe0abebd 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.WOE 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.WOE 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.WOE 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:06.080 19:55:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d9873bd06f3e7ff61a32b962daeeb3ae 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.FzD 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d9873bd06f3e7ff61a32b962daeeb3ae 1 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d9873bd06f3e7ff61a32b962daeeb3ae 1 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d9873bd06f3e7ff61a32b962daeeb3ae 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.FzD 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.FzD 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.FzD 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2137b62ec9f418970e2b8bf648fd78ee 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.fwY 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2137b62ec9f418970e2b8bf648fd78ee 1 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2137b62ec9f418970e2b8bf648fd78ee 1 00:17:06.080 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:06.081 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:06.081 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=2137b62ec9f418970e2b8bf648fd78ee 00:17:06.081 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:17:06.081 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:06.081 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.fwY 00:17:06.081 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.fwY 00:17:06.081 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.fwY 00:17:06.081 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:06.081 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:06.081 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:06.081 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:06.081 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:17:06.081 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:06.081 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:06.081 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c8d8a5c2f12562506976cfe2ee68791ed48f8ab8eccee93f 00:17:06.081 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:06.081 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.nNo 00:17:06.081 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c8d8a5c2f12562506976cfe2ee68791ed48f8ab8eccee93f 2 00:17:06.081 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c8d8a5c2f12562506976cfe2ee68791ed48f8ab8eccee93f 2 00:17:06.081 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:06.081 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:06.081 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c8d8a5c2f12562506976cfe2ee68791ed48f8ab8eccee93f 00:17:06.081 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:17:06.081 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.nNo 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.nNo 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.nNo 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:06.373 19:55:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7ef9befcd4b718d297dad3ddc2c6b632 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.hS1 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7ef9befcd4b718d297dad3ddc2c6b632 0 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7ef9befcd4b718d297dad3ddc2c6b632 0 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7ef9befcd4b718d297dad3ddc2c6b632 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.hS1 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.hS1 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.hS1 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5670b54b093acdadac6d222fc5999d874f8e59b272c7c9b3b89f8f043a899f96 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.npv 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5670b54b093acdadac6d222fc5999d874f8e59b272c7c9b3b89f8f043a899f96 3 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5670b54b093acdadac6d222fc5999d874f8e59b272c7c9b3b89f8f043a899f96 3 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5670b54b093acdadac6d222fc5999d874f8e59b272c7c9b3b89f8f043a899f96 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.npv 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.npv 00:17:06.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.npv 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 77776 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 77776 ']' 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:06.373 19:55:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.648 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:06.648 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:17:06.648 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:06.648 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.7R1 00:17:06.648 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.648 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.648 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.648 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.O7r ]] 00:17:06.648 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.O7r 00:17:06.648 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.648 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.648 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.648 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:06.648 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.D78 00:17:06.648 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.648 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.648 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.648 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.WOE ]] 00:17:06.648 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.WOE 00:17:06.648 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.648 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.648 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.648 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:06.648 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.FzD 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.fwY ]] 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fwY 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.nNo 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.hS1 ]] 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.hS1 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.npv 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:06.649 19:55:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:06.649 19:55:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:07.215 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:07.215 Waiting for block devices as requested 00:17:07.215 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:07.215 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:07.782 No valid GPT data, bailing 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:07.782 No valid GPT data, bailing 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@392 -- # return 1 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:17:07.782 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:08.041 No valid GPT data, bailing 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:08.041 No valid GPT data, bailing 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid=69cdc0e8-4c23-4318-834b-1d87efff05de -a 10.0.0.1 -t tcp -s 4420 00:17:08.041 00:17:08.041 Discovery Log Number of Records 2, Generation counter 2 00:17:08.041 =====Discovery Log Entry 0====== 00:17:08.041 trtype: tcp 00:17:08.041 adrfam: ipv4 00:17:08.041 subtype: current discovery subsystem 00:17:08.041 treq: not specified, sq flow control disable supported 00:17:08.041 portid: 1 00:17:08.041 trsvcid: 4420 00:17:08.041 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:08.041 traddr: 10.0.0.1 00:17:08.041 eflags: none 00:17:08.041 sectype: none 00:17:08.041 =====Discovery Log Entry 1====== 00:17:08.041 trtype: tcp 00:17:08.041 adrfam: ipv4 00:17:08.041 subtype: nvme subsystem 00:17:08.041 treq: not specified, sq flow control disable supported 00:17:08.041 portid: 1 00:17:08.041 trsvcid: 4420 00:17:08.041 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:08.041 traddr: 10.0.0.1 00:17:08.041 eflags: none 00:17:08.041 sectype: none 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:08.041 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: ]] 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 
10.0.0.1 ]] 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.300 nvme0n1 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: ]] 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.300 19:55:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.559 nvme0n1 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.559 
19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: ]] 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:08.559 19:55:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.559 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.818 nvme0n1 00:17:08.818 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.818 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.818 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.818 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.818 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.818 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.818 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.818 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.818 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.818 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.818 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.818 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.818 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:08.818 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.818 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:08.818 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:08.819 19:55:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: ]] 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.819 nvme0n1 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.819 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: ]] 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.078 19:55:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.078 nvme0n1 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:09.078 
19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.078 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.079 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:09.079 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:09.079 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:09.079 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.079 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.079 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:09.079 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.079 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:09.079 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:09.079 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:09.079 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:09.079 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.079 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
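The sweep above has just finished the sha256/ffdhe2048 pass over key ids 0-4: for each id, host/auth.sh@103 sets the corresponding DH-HMAC-CHAP secret on the kernel target side, then host/auth.sh@60-61 reconfigure the SPDK initiator and attach/detach the controller over TCP. A minimal host-side sketch of one such iteration is below; it assumes rpc_cmd resolves to the usual scripts/rpc.py wrapper and that the key0/ckey0 names were registered with the target earlier in the script (neither is shown in this excerpt).

  # Restrict the initiator to one digest/dhgroup pair for this pass
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # Authenticate against the kernel target at 10.0.0.1:4420 with key0 (ckey0 enables bidirectional auth)
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Confirm the controller came up, then tear it down before the next digest/dhgroup/key combination
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The log continues below with the same key sweep repeated for the ffdhe3072 group.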
00:17:09.337 nvme0n1 00:17:09.337 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.337 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.338 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.338 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.338 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.338 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.338 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.338 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.338 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.338 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.338 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.338 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.338 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.338 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:09.338 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.338 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:09.338 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:09.338 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:09.338 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:09.338 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:09.338 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:09.338 19:55:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: ]] 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:09.597 19:55:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.597 nvme0n1 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.597 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.856 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.856 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.856 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.856 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.857 19:55:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: ]] 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.857 19:55:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.857 nvme0n1 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: ]] 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.857 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.116 nvme0n1 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: ]] 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:10.116 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.117 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.117 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.117 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.117 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:10.117 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:10.117 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:10.117 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.117 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.117 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:10.117 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.117 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:10.117 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:10.117 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:10.117 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:10.117 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.117 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.375 nvme0n1 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:10.375 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.376 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.376 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:10.376 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.376 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:10.376 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:10.376 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:10.376 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:10.376 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.376 19:55:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.376 nvme0n1 00:17:10.376 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.376 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.376 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.376 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.376 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.634 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.634 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.634 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.634 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.634 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.634 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.634 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.634 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.634 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:10.634 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.634 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:10.634 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:10.634 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:10.634 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:10.634 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:10.634 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:10.634 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:11.202 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:11.202 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: ]] 00:17:11.202 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:11.202 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:11.202 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.202 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.202 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:11.202 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:11.202 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.202 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:11.202 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.202 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.202 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.202 19:55:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.202 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:11.202 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:11.202 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:11.202 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.202 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.202 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:11.202 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.202 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:11.202 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:11.202 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:11.202 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.202 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.202 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.460 nvme0n1 00:17:11.460 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.460 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.460 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: ]] 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.461 19:55:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.461 19:55:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.719 nvme0n1 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: ]] 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.720 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.978 nvme0n1 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: ]] 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.978 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.237 nvme0n1 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:12.237 19:55:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.237 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.495 nvme0n1 00:17:12.495 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.495 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.495 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.495 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.495 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.495 19:55:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.495 19:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.495 19:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.495 19:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.495 19:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.495 19:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.495 19:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.495 19:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.495 19:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:12.495 19:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.495 19:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:12.495 19:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:12.495 19:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:12.495 19:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:12.495 19:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:12.495 19:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:12.495 19:55:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: ]] 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.395 19:55:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.653 nvme0n1 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: ]] 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.654 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.232 nvme0n1 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.232 19:55:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: ]] 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.232 19:55:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.232 19:55:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.490 nvme0n1 00:17:15.490 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.490 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.490 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.490 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.490 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.490 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: ]] 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:15.491 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.491 
19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.058 nvme0n1 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:16.058 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.059 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.317 nvme0n1 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.317 19:55:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: ]] 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.317 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.318 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.318 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:16.318 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:16.318 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:16.318 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.318 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.318 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:16.318 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.318 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:16.318 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:16.318 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:16.318 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.318 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.318 19:55:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.252 nvme0n1 00:17:17.252 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.252 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.252 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.252 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.252 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.252 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.252 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.252 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.252 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.252 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.252 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.252 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.252 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:17.252 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.252 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:17.252 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:17.252 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:17.252 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:17.252 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:17.252 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:17.252 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:17.252 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:17.252 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: ]] 00:17:17.252 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:17.252 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:17:17.253 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.253 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:17.253 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:17.253 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:17.253 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.253 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:17.253 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.253 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.253 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.253 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.253 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:17.253 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:17.253 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:17.253 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.253 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.253 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:17.253 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.253 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:17.253 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:17.253 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:17.253 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.253 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.253 19:55:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.819 nvme0n1 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: ]] 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.819 
19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.819 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.385 nvme0n1 00:17:18.385 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.385 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.385 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.385 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.385 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.385 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.385 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.385 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.385 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.385 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.385 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.385 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.385 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:18.385 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.385 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:18.385 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:18.385 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:18.385 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:18.385 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: ]] 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.386 19:55:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.956 nvme0n1 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.956 19:55:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.956 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.214 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.214 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:19.214 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:19.214 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:19.214 19:55:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.214 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.214 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:19.214 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.214 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:19.214 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:19.214 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:19.214 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:19.214 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.214 19:55:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.781 nvme0n1 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: ]] 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:19.781 nvme0n1 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.781 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: ]] 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.040 nvme0n1 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:20.040 
19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: ]] 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.040 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.299 nvme0n1 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: ]] 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.299 
19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.299 nvme0n1 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.299 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.557 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.557 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.557 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.557 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:20.557 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.557 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.557 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:20.557 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.557 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.557 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:20.557 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:20.557 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:20.557 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:20.557 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.557 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:20.558 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:20.558 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:20.558 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:20.558 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.558 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.558 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:20.558 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:20.558 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.558 19:55:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.558 nvme0n1 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: ]] 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.558 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.816 nvme0n1 00:17:20.816 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.816 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.816 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.816 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.817 
19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: ]] 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:20.817 19:55:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.817 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.076 nvme0n1 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:21.076 19:55:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: ]] 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.076 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.335 nvme0n1 00:17:21.335 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.335 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.335 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.335 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.335 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.335 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.335 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.335 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.335 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.335 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.335 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.335 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.335 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:21.335 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.335 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:21.335 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:21.335 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:21.335 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:21.335 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:21.335 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:21.335 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: ]] 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.336 19:55:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.336 nvme0n1 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.336 19:55:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.336 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:21.595 
19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
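The trace above is host/auth.sh iterating over each DH group and key index for the sha384 digest: for every combination it loads the DHHC-1 secret (and controller secret, when one exists) into the target via nvmet_auth_set_key, restricts the host to a single digest/DH group with bdev_nvme_set_options, attaches nvme0 over TCP to 10.0.0.1:4420 with --dhchap-key (plus --dhchap-ctrlr-key for bidirectional authentication), confirms the controller name with bdev_nvme_get_controllers, and detaches it. The sketch below is a minimal stand-alone version of one such pass, not the test script itself: it assumes rpc_cmd in the trace wraps SPDK's scripts/rpc.py, that the target subsystem nqn.2024-02.io.spdk:cnode0 at 10.0.0.1:4420 is already configured for DH-HMAC-CHAP, and that key1/ckey1 are key names the test registered earlier, outside this excerpt.

    #!/usr/bin/env bash
    # Hypothetical single pass of the connect_authenticate loop traced above.
    # Assumptions: SPDK target reachable at 10.0.0.1:4420, subsystem
    # nqn.2024-02.io.spdk:cnode0 set up for DH-HMAC-CHAP, and key names
    # key1/ckey1 registered beforehand (done earlier in the test, not shown here).
    set -e

    rpc=scripts/rpc.py      # SPDK RPC client, assumed relative to the SPDK repo root
    digest=sha384           # one of the --dhchap-digests values exercised by the test
    dhgroup=ffdhe3072       # one of the --dhchap-dhgroups values exercised by the test

    # Limit the host to a single digest/DH group, mirroring host/auth.sh@60.
    $rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach the controller with the host key and controller key, as in host/auth.sh@61.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Verify the authenticated controller came up, then tear it down (host/auth.sh@64-65).
    name=$($rpc bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]
    $rpc bdev_nvme_detach_controller nvme0

A failed authentication would surface here as bdev_nvme_attach_controller returning an error instead of the "nvme0n1" namespace lines that follow each successful pass in the trace.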
00:17:21.595 nvme0n1 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: ]] 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:21.595 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.596 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:21.596 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:21.596 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:21.596 19:55:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.596 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.596 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.596 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.596 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.596 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.596 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:21.596 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:21.596 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:21.596 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.596 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.596 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:21.596 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.596 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.854 nvme0n1 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.854 19:55:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: ]] 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.854 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.113 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.113 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:22.113 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:22.113 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:22.113 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:22.113 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.113 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.113 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:22.113 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.113 19:55:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:22.113 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:22.113 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:22.113 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.113 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.113 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.113 nvme0n1 00:17:22.113 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.113 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.113 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.113 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.113 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:22.113 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.113 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.113 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.113 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.113 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.371 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.371 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:22.371 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:22.371 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:22.371 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:22.371 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:22.371 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:22.371 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:22.371 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:22.371 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:22.371 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:22.371 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:22.371 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: ]] 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.372 nvme0n1 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:22.372 19:55:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.372 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.372 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.630 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.630 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:22.630 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.630 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.630 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.630 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:22.630 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:22.630 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:22.630 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:22.630 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:22.630 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: ]] 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.631 nvme0n1 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.631 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.890 nvme0n1 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.890 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: ]] 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.149 19:55:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.149 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.408 nvme0n1 00:17:23.408 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.408 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.408 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.408 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.408 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.408 19:55:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.408 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.408 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.408 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.408 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.408 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.408 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.408 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:23.408 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.408 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:23.408 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: ]] 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.409 19:55:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.409 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.975 nvme0n1 00:17:23.975 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.975 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.975 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.975 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.975 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: ]] 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.976 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.235 nvme0n1 00:17:24.235 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.235 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.235 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.235 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.235 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.235 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.235 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.235 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.235 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.235 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: ]] 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.494 19:55:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.756 nvme0n1 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:24.756 19:55:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.756 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.322 nvme0n1 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: ]] 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.323 19:55:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.893 nvme0n1 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: ]] 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.893 19:55:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.460 nvme0n1 00:17:26.460 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.460 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.460 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.460 19:55:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.460 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: ]] 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.461 19:55:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.461 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.395 nvme0n1 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: ]] 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.395 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:27.396 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.396 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.396 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.396 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.396 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.396 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.396 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.396 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.396 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.396 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.396 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.396 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.396 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.396 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.396 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:27.396 19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.396 
19:55:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.963 nvme0n1 00:17:27.963 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.963 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.963 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.963 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.963 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.963 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.963 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.963 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.963 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.963 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.963 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.963 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.963 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:27.963 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.963 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:27.963 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:27.963 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:27.963 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:27.963 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:27.963 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:27.963 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:27.964 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:27.964 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:27.964 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:27.964 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.964 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:27.964 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:27.964 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:27.964 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.964 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:27.964 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.964 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.964 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.964 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.964 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.964 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.964 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.964 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.964 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.964 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.964 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.964 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.964 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.964 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.964 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:27.964 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.964 19:55:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.532 nvme0n1 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:28.532 19:55:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: ]] 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:28.532 19:55:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.532 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.792 nvme0n1 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: ]] 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:28.792 19:55:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.792 nvme0n1 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.792 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: ]] 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.051 nvme0n1 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.051 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: ]] 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.052 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.311 nvme0n1 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.311 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.570 nvme0n1 00:17:29.570 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.570 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.570 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.570 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.570 19:55:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: ]] 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:29.570 nvme0n1 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.570 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: ]] 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.830 nvme0n1 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.830 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:29.831 
19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: ]] 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.831 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.090 nvme0n1 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: ]] 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.090 
19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.090 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.349 nvme0n1 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.349 19:55:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.349 nvme0n1 00:17:30.349 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.349 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.349 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.349 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.349 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.349 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.607 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.607 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.607 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.607 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.607 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.607 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.607 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.607 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:17:30.607 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.607 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:30.607 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:30.607 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:30.607 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:30.607 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:30.607 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:30.607 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:30.607 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:30.607 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: ]] 00:17:30.607 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:30.607 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:30.607 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.607 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:30.607 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:30.607 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:30.608 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.608 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:30.608 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.608 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.608 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.608 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.608 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:30.608 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:30.608 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:30.608 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.608 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.608 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:30.608 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.608 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:30.608 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:30.608 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:30.608 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.608 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.608 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.866 nvme0n1 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.866 
19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: ]] 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:30.866 19:55:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:30.866 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.867 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:30.867 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:30.867 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:30.867 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.867 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.867 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.125 nvme0n1 00:17:31.125 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.125 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.125 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.125 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.125 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.125 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.125 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.125 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.125 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.125 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.125 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.125 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.125 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:31.125 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.125 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:31.125 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:31.125 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:31.125 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:31.125 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:31.125 19:55:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:31.125 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:31.125 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:31.125 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: ]] 00:17:31.125 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:31.125 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:31.125 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.126 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:31.126 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:31.126 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:31.126 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.126 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:31.126 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.126 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.126 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.126 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.126 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:31.126 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:31.126 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:31.126 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.126 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.126 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:31.126 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.126 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:31.126 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:31.126 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:31.126 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.126 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.126 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.384 nvme0n1 00:17:31.384 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.384 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.384 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.384 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.384 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.384 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.384 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.384 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.384 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.384 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.384 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.384 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.384 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:31.384 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.384 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:31.384 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: ]] 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.385 19:55:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.385 19:55:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.644 nvme0n1 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:31.644 
19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.644 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
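Each iteration traced above follows the same two-sided pattern for a given (digest, dhgroup, keyid) triple. On the target side, nvmet_auth_set_key echoes the HMAC name, the DH group, the DHHC-1 secret and, when one exists, the controller secret; the redirection targets are not visible in the xtrace, so the sketch below assumes the standard kernel nvmet configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) under the host entry created earlier in the test, and it reuses the keyid=1 secrets shown in the trace. Treat it as an illustration of the step, not a verbatim excerpt of auth.sh.

# target side: provision DH-HMAC-CHAP secrets for one (digest, dhgroup, keyid) combination
# assumption: the nvmet subsystem and this host entry already exist in configfs
hostnqn=nqn.2024-02.io.spdk:host0
host_dir=/sys/kernel/config/nvmet/hosts/${hostnqn}

key='DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==:'
ckey='DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==:'

echo 'hmac(sha512)' > "${host_dir}/dhchap_hash"      # digest under test
echo 'ffdhe4096'    > "${host_dir}/dhchap_dhgroup"   # DH group under test
echo "${key}"       > "${host_dir}/dhchap_key"       # host secret for this keyid
# the controller (bidirectional) secret is only written when the keyid has one; keyid=4 in this run does not
[[ -z ${ckey} ]] || echo "${ckey}" > "${host_dir}/dhchap_ctrl_key"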
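The host half of the same iteration is driven entirely through SPDK RPCs, all of which appear verbatim in the trace: bdev_nvme_set_options restricts the negotiable digest and DH group, bdev_nvme_attach_controller supplies the named DH-HMAC-CHAP keys, bdev_nvme_get_controllers confirms the controller came up, and bdev_nvme_detach_controller tears it down before the next combination. A minimal standalone sketch with scripts/rpc.py follows; it assumes a running SPDK application on the initiator and that keyring entries named key1/ckey1 holding the DHHC-1 secrets were registered earlier in the test, outside this excerpt.

rpc=scripts/rpc.py

# pin the host to the digest/DH group being exercised
${rpc} bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# attach with the host secret and the bidirectional (controller) secret for keyid=1
${rpc} bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# authentication succeeded if the controller is listed under the expected name
[[ $(${rpc} bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# clean up so the next (digest, dhgroup, keyid) combination starts from scratch
${rpc} bdev_nvme_detach_controller nvme0

Keys without a controller secret (keyid=4 in this run) simply drop the --dhchap-ctrlr-key argument, which is what the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion visible in the trace achieves.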
00:17:31.903 nvme0n1 00:17:31.903 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.903 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.903 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.903 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.903 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.903 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.903 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.903 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.903 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.903 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.903 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.903 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.903 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.903 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:31.903 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.903 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:31.903 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:31.903 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:31.903 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:31.903 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:31.903 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: ]] 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:31.904 19:56:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.904 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.162 nvme0n1 00:17:32.162 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.162 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.162 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.162 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.162 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.162 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.421 19:56:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: ]] 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.421 19:56:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.421 19:56:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.680 nvme0n1 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: ]] 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.680 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.305 nvme0n1 00:17:33.305 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.305 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.305 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.305 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.305 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.305 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: ]] 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.306 19:56:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.564 nvme0n1 00:17:33.564 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.564 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.564 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.564 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.564 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.564 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.564 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.564 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.564 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.564 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.564 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.564 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.564 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:33.564 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.564 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:33.564 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:33.564 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:33.564 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.565 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.823 nvme0n1 00:17:33.823 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.823 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.823 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.823 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.823 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.823 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA2NTkxZWNkZGYzNDIzYjZlMzRlZTg1MTllYWY1ZDnsDuzz: 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: ]] 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWU4ODJlMmQ5ODdlZmFhNGVmNTQxYzVhNTk3YTJiYmJhMDQ4NDExYTNmZTJmZDYwN2RmNDI3ODY3OGExMGJlN0lQgSM=: 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.082 19:56:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.082 19:56:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.649 nvme0n1 00:17:34.649 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.649 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.649 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.649 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.649 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: ]] 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.650 19:56:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.650 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.217 nvme0n1 00:17:35.217 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.217 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.217 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.217 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.217 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.217 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.475 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.475 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.475 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.475 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.475 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.475 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.475 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:35.475 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.475 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDk4NzNiZDA2ZjNlN2ZmNjFhMzJiOTYyZGFlZWIzYWWZAmsO: 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: ]] 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjEzN2I2MmVjOWY0MTg5NzBlMmI4YmY2NDhmZDc4ZWX8kUZX: 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.476 19:56:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.042 nvme0n1 00:17:36.042 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.042 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.042 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.042 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.042 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.042 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.042 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.042 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.042 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.042 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.042 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.042 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzhkOGE1YzJmMTI1NjI1MDY5NzZjZmUyZWU2ODc5MWVkNDhmOGFiOGVjY2VlOTNmoIqAtg==: 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: ]] 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2VmOWJlZmNkNGI3MThkMjk3ZGFkM2RkYzJjNmI2MzJSvVuu: 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.043 19:56:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.609 nvme0n1 00:17:36.609 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.609 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.609 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.609 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.609 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.609 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.609 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.609 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.609 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.609 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.906 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.906 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.906 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:36.906 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.906 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:36.906 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:36.906 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:36.906 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:36.906 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:36.906 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:36.906 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:36.906 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTY3MGI1NGIwOTNhY2RhZGFjNmQyMjJmYzU5OTlkODc0ZjhlNTliMjcyYzdjOWIzYjg5ZjhmMDQzYTg5OWY5Ni/bur4=: 00:17:36.906 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:36.906 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:36.906 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.907 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:36.907 19:56:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:36.907 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:36.907 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.907 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:36.907 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.907 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.907 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.907 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.907 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:36.907 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.907 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.907 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.907 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.907 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:36.907 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.907 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.907 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.907 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.907 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:36.907 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.907 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.495 nvme0n1 00:17:37.495 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.495 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.495 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.495 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.495 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.495 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.495 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.495 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.495 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.495 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.495 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.495 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:37.495 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.495 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:37.495 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:37.495 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:37.495 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:37.495 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:37.495 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:37.495 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:37.495 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDY0ZGVmMzc1ZWM1Y2IwYTc2NzZhMjY4NDA4ODczMzQ0NDVlZmY0NjFmNGQzMGEzaaCarw==: 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: ]] 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg0YjcyMjk0M2VlMDg0MGQ4ZmMzMzY4MGUxNTgyZDBiYmFiN2U1OWZlMGFiZWJkzQqkSQ==: 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.496 19:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.496 request: 00:17:37.496 { 00:17:37.496 "name": "nvme0", 00:17:37.496 "trtype": "tcp", 00:17:37.496 "traddr": "10.0.0.1", 00:17:37.496 "adrfam": "ipv4", 00:17:37.496 "trsvcid": "4420", 00:17:37.496 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:37.496 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:37.496 "prchk_reftag": false, 00:17:37.496 "prchk_guard": false, 00:17:37.496 "hdgst": false, 00:17:37.496 "ddgst": false, 00:17:37.496 "method": "bdev_nvme_attach_controller", 00:17:37.496 "req_id": 1 00:17:37.496 } 00:17:37.496 Got JSON-RPC error response 00:17:37.496 response: 00:17:37.496 { 00:17:37.496 "code": -5, 00:17:37.496 "message": "Input/output error" 00:17:37.496 } 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.496 19:56:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.496 request: 00:17:37.496 { 00:17:37.496 "name": "nvme0", 00:17:37.496 "trtype": "tcp", 00:17:37.496 "traddr": "10.0.0.1", 00:17:37.496 "adrfam": "ipv4", 00:17:37.496 "trsvcid": "4420", 00:17:37.496 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:37.496 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:37.496 "prchk_reftag": false, 00:17:37.496 "prchk_guard": false, 00:17:37.496 "hdgst": false, 00:17:37.496 "ddgst": false, 00:17:37.496 "dhchap_key": "key2", 00:17:37.496 "method": "bdev_nvme_attach_controller", 00:17:37.496 "req_id": 1 00:17:37.496 } 00:17:37.496 Got JSON-RPC error response 00:17:37.496 response: 00:17:37.496 { 00:17:37.496 "code": -5, 00:17:37.496 "message": "Input/output error" 00:17:37.496 } 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.496 19:56:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.496 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.753 request: 00:17:37.753 { 00:17:37.753 "name": "nvme0", 00:17:37.753 "trtype": "tcp", 00:17:37.753 "traddr": "10.0.0.1", 00:17:37.753 "adrfam": "ipv4", 00:17:37.753 "trsvcid": "4420", 00:17:37.753 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:37.753 "hostnqn": "nqn.2024-02.io.spdk:host0", 
00:17:37.753 "prchk_reftag": false, 00:17:37.753 "prchk_guard": false, 00:17:37.753 "hdgst": false, 00:17:37.753 "ddgst": false, 00:17:37.753 "dhchap_key": "key1", 00:17:37.753 "dhchap_ctrlr_key": "ckey2", 00:17:37.753 "method": "bdev_nvme_attach_controller", 00:17:37.753 "req_id": 1 00:17:37.753 } 00:17:37.753 Got JSON-RPC error response 00:17:37.753 response: 00:17:37.753 { 00:17:37.753 "code": -5, 00:17:37.753 "message": "Input/output error" 00:17:37.753 } 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:37.753 rmmod nvme_tcp 00:17:37.753 rmmod nvme_fabrics 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 77776 ']' 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 77776 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 77776 ']' 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 77776 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77776 00:17:37.753 killing process with pid 77776 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77776' 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 77776 00:17:37.753 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@974 -- # wait 77776 00:17:38.010 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:38.010 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:38.010 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:38.010 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:38.010 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:38.010 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.010 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:38.010 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.010 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:38.010 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:38.010 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:38.010 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:38.010 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:38.010 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:17:38.010 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:38.010 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:38.010 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:38.010 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:38.010 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:38.010 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:38.010 19:56:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:38.942 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:38.942 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:38.942 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:38.942 19:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.7R1 /tmp/spdk.key-null.D78 /tmp/spdk.key-sha256.FzD /tmp/spdk.key-sha384.nNo /tmp/spdk.key-sha512.npv /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:38.942 19:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:39.199 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:39.458 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:39.458 0000:00:10.0 
(1b36 0010): Already using the uio_pci_generic driver 00:17:39.458 00:17:39.458 real 0m35.132s 00:17:39.458 user 0m31.387s 00:17:39.458 sys 0m3.686s 00:17:39.458 19:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:39.458 19:56:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.458 ************************************ 00:17:39.458 END TEST nvmf_auth_host 00:17:39.458 ************************************ 00:17:39.458 19:56:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:17:39.458 19:56:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:39.458 19:56:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:39.458 19:56:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:39.458 19:56:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.458 ************************************ 00:17:39.458 START TEST nvmf_digest 00:17:39.458 ************************************ 00:17:39.458 19:56:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:39.458 * Looking for test storage... 00:17:39.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:39.458 19:56:08 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:39.458 Cannot find device "nvmf_tgt_br" 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # true 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:39.458 Cannot find device "nvmf_tgt_br2" 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # true 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:39.458 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:39.717 Cannot find device "nvmf_tgt_br" 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # true 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:39.717 Cannot find device "nvmf_tgt_br2" 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # true 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:39.717 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:39.717 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev 
nvmf_tgt_if2 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:39.717 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:39.982 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:39.982 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:39.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:39.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:17:39.982 00:17:39.982 --- 10.0.0.2 ping statistics --- 00:17:39.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.983 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:17:39.983 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:39.983 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:39.983 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:17:39.983 00:17:39.983 --- 10.0.0.3 ping statistics --- 00:17:39.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.983 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:39.983 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:39.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:39.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:17:39.983 00:17:39.983 --- 10.0.0.1 ping statistics --- 00:17:39.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.983 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:17:39.983 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.983 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:17:39.983 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:39.983 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.983 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:39.983 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:39.983 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.983 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:39.983 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:39.983 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:39.983 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:39.983 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:39.983 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:39.983 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:39.983 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:39.983 ************************************ 00:17:39.983 START TEST nvmf_digest_clean 00:17:39.983 ************************************ 00:17:39.983 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:17:39.983 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:17:39.983 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:39.983 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:39.983 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:39.983 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:39.983 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:39.983 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:39.983 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:39.984 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=79352 00:17:39.984 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:39.984 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 79352 00:17:39.984 
19:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79352 ']' 00:17:39.984 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.984 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:39.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.984 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.984 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:39.984 19:56:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:39.984 [2024-07-24 19:56:08.493802] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:17:39.984 [2024-07-24 19:56:08.493878] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.984 [2024-07-24 19:56:08.627195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.313 [2024-07-24 19:56:08.732223] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.313 [2024-07-24 19:56:08.732300] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.313 [2024-07-24 19:56:08.732312] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.313 [2024-07-24 19:56:08.732321] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.313 [2024-07-24 19:56:08.732329] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:40.313 [2024-07-24 19:56:08.732361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.881 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:40.881 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:40.881 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:40.881 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:40.881 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:40.881 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.881 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:40.881 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:40.881 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:40.881 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.881 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:41.140 [2024-07-24 19:56:09.557103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:41.140 null0 00:17:41.140 [2024-07-24 19:56:09.605012] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.140 [2024-07-24 19:56:09.629131] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:41.140 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.140 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:41.140 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:41.140 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:41.140 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:41.140 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:41.140 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:41.140 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:41.140 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79384 00:17:41.140 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79384 /var/tmp/bperf.sock 00:17:41.140 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79384 ']' 00:17:41.140 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:41.140 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:41.140 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:41.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:41.140 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:41.140 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:41.140 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:41.140 [2024-07-24 19:56:09.680820] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:17:41.140 [2024-07-24 19:56:09.680888] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79384 ] 00:17:41.398 [2024-07-24 19:56:09.817698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.398 [2024-07-24 19:56:09.924725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.398 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:41.398 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:41.398 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:41.398 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:41.398 19:56:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:41.674 [2024-07-24 19:56:10.225835] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:41.674 19:56:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:41.674 19:56:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:41.932 nvme0n1 00:17:41.932 19:56:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:41.932 19:56:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:42.189 Running I/O for 2 seconds... 
00:17:44.089 00:17:44.089 Latency(us) 00:17:44.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.089 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:44.089 nvme0n1 : 2.01 15032.81 58.72 0.00 0.00 8506.37 7923.90 18588.39 00:17:44.089 =================================================================================================================== 00:17:44.089 Total : 15032.81 58.72 0.00 0.00 8506.37 7923.90 18588.39 00:17:44.089 0 00:17:44.089 19:56:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:44.089 19:56:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:44.089 19:56:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:44.089 19:56:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:44.089 19:56:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:44.089 | select(.opcode=="crc32c") 00:17:44.089 | "\(.module_name) \(.executed)"' 00:17:44.348 19:56:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:44.348 19:56:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:44.348 19:56:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:44.348 19:56:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:44.348 19:56:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79384 00:17:44.348 19:56:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79384 ']' 00:17:44.348 19:56:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79384 00:17:44.348 19:56:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:17:44.348 19:56:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:44.348 19:56:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79384 00:17:44.348 19:56:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:44.348 killing process with pid 79384 00:17:44.348 19:56:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:44.348 19:56:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79384' 00:17:44.349 Received shutdown signal, test time was about 2.000000 seconds 00:17:44.349 00:17:44.349 Latency(us) 00:17:44.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.349 =================================================================================================================== 00:17:44.349 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:44.349 19:56:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79384 00:17:44.349 19:56:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79384 00:17:44.608 19:56:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:44.608 19:56:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:44.608 19:56:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:44.608 19:56:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:44.608 19:56:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:44.608 19:56:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:44.608 19:56:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:44.608 19:56:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79437 00:17:44.608 19:56:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:44.608 19:56:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79437 /var/tmp/bperf.sock 00:17:44.608 19:56:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79437 ']' 00:17:44.608 19:56:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:44.608 19:56:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:44.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:44.608 19:56:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:44.608 19:56:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:44.608 19:56:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:44.608 [2024-07-24 19:56:13.252336] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:17:44.608 [2024-07-24 19:56:13.252413] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79437 ] 00:17:44.608 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:44.608 Zero copy mechanism will not be used. 
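Each of these bperf runs is judged by the same accel-statistics check that appears right after its latency table: the script reads crc32c statistics back over the bperf socket and, with scan_dsa=false as in these runs, expects the software accel module to have executed at least one crc32c operation. Condensed from the trace (nothing new is added here):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # output is parsed as "<module_name> <executed>"; the run passes when the module
    # matches the expected "software" and the executed count is greater than 0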
00:17:44.867 [2024-07-24 19:56:13.384662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.867 [2024-07-24 19:56:13.495659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.801 19:56:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:45.801 19:56:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:45.801 19:56:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:45.801 19:56:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:45.801 19:56:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:46.059 [2024-07-24 19:56:14.505443] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:46.059 19:56:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:46.059 19:56:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:46.318 nvme0n1 00:17:46.318 19:56:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:46.318 19:56:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:46.576 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:46.576 Zero copy mechanism will not be used. 00:17:46.576 Running I/O for 2 seconds... 
00:17:48.516 00:17:48.517 Latency(us) 00:17:48.517 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.517 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:48.517 nvme0n1 : 2.00 7601.83 950.23 0.00 0.00 2101.30 1869.27 3336.38 00:17:48.517 =================================================================================================================== 00:17:48.517 Total : 7601.83 950.23 0.00 0.00 2101.30 1869.27 3336.38 00:17:48.517 0 00:17:48.517 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:48.517 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:48.517 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:48.517 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:48.517 | select(.opcode=="crc32c") 00:17:48.517 | "\(.module_name) \(.executed)"' 00:17:48.517 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:48.775 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:48.775 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:48.775 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:48.775 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:48.775 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79437 00:17:48.775 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79437 ']' 00:17:48.775 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79437 00:17:48.775 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:17:48.775 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:48.775 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79437 00:17:48.775 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:48.775 killing process with pid 79437 00:17:48.775 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:48.775 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79437' 00:17:48.775 Received shutdown signal, test time was about 2.000000 seconds 00:17:48.775 00:17:48.775 Latency(us) 00:17:48.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.775 =================================================================================================================== 00:17:48.775 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:48.775 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79437 00:17:48.775 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79437 00:17:49.034 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:49.034 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:49.034 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:49.034 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:49.034 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:49.034 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:49.034 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:49.034 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79492 00:17:49.034 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79492 /var/tmp/bperf.sock 00:17:49.034 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:49.034 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79492 ']' 00:17:49.034 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:49.034 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:49.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:49.034 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:49.034 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:49.034 19:56:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:49.034 [2024-07-24 19:56:17.686602] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:17:49.034 [2024-07-24 19:56:17.686698] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79492 ] 00:17:49.292 [2024-07-24 19:56:17.822685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.550 [2024-07-24 19:56:17.968316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.116 19:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:50.117 19:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:50.117 19:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:50.117 19:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:50.117 19:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:50.374 [2024-07-24 19:56:18.893795] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:50.374 19:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:50.374 19:56:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:50.631 nvme0n1 00:17:50.631 19:56:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:50.631 19:56:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:50.889 Running I/O for 2 seconds... 
00:17:52.793 00:17:52.793 Latency(us) 00:17:52.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.793 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:52.793 nvme0n1 : 2.01 16323.72 63.76 0.00 0.00 7834.83 7268.54 14715.81 00:17:52.793 =================================================================================================================== 00:17:52.793 Total : 16323.72 63.76 0.00 0.00 7834.83 7268.54 14715.81 00:17:52.793 0 00:17:52.793 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:52.793 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:52.793 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:52.793 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:52.793 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:52.793 | select(.opcode=="crc32c") 00:17:52.793 | "\(.module_name) \(.executed)"' 00:17:53.052 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:53.052 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:53.052 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:53.052 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:53.052 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79492 00:17:53.052 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79492 ']' 00:17:53.052 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79492 00:17:53.052 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:17:53.052 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:53.052 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79492 00:17:53.052 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:53.052 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:53.052 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79492' 00:17:53.052 killing process with pid 79492 00:17:53.052 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79492 00:17:53.052 Received shutdown signal, test time was about 2.000000 seconds 00:17:53.052 00:17:53.052 Latency(us) 00:17:53.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.052 =================================================================================================================== 00:17:53.052 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:53.052 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79492 00:17:53.310 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:53.310 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:53.310 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:53.310 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:53.310 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:53.310 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:53.310 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:53.310 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79552 00:17:53.310 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79552 /var/tmp/bperf.sock 00:17:53.310 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:53.310 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 79552 ']' 00:17:53.310 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:53.310 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:53.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:53.310 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:53.310 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:53.310 19:56:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:53.310 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:53.310 Zero copy mechanism will not be used. 00:17:53.310 [2024-07-24 19:56:21.952673] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:17:53.310 [2024-07-24 19:56:21.952789] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79552 ] 00:17:53.569 [2024-07-24 19:56:22.090726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.569 [2024-07-24 19:56:22.204552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.504 19:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:54.504 19:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:17:54.504 19:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:54.504 19:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:54.504 19:56:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:54.763 [2024-07-24 19:56:23.217958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:54.763 19:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:54.763 19:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:55.021 nvme0n1 00:17:55.021 19:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:55.021 19:56:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:55.280 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:55.280 Zero copy mechanism will not be used. 00:17:55.280 Running I/O for 2 seconds... 
00:17:57.181 00:17:57.182 Latency(us) 00:17:57.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.182 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:57.182 nvme0n1 : 2.00 6282.78 785.35 0.00 0.00 2539.62 1809.69 7030.23 00:17:57.182 =================================================================================================================== 00:17:57.182 Total : 6282.78 785.35 0.00 0.00 2539.62 1809.69 7030.23 00:17:57.182 0 00:17:57.182 19:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:57.182 19:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:57.182 19:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:57.182 19:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:57.182 19:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:57.182 | select(.opcode=="crc32c") 00:17:57.182 | "\(.module_name) \(.executed)"' 00:17:57.440 19:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:57.440 19:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:57.440 19:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:57.440 19:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:57.440 19:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79552 00:17:57.440 19:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79552 ']' 00:17:57.440 19:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79552 00:17:57.440 19:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:17:57.440 19:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:57.440 19:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79552 00:17:57.440 19:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:57.440 19:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:57.440 killing process with pid 79552 00:17:57.440 19:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79552' 00:17:57.440 Received shutdown signal, test time was about 2.000000 seconds 00:17:57.440 00:17:57.440 Latency(us) 00:17:57.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.440 =================================================================================================================== 00:17:57.440 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:57.440 19:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79552 00:17:57.440 19:56:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
79552 00:17:57.699 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79352 00:17:57.699 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 79352 ']' 00:17:57.699 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 79352 00:17:57.699 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:17:57.699 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:57.699 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79352 00:17:57.699 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:57.699 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:57.699 killing process with pid 79352 00:17:57.699 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79352' 00:17:57.699 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 79352 00:17:57.699 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 79352 00:17:57.958 00:17:57.958 real 0m18.030s 00:17:57.958 user 0m34.580s 00:17:57.958 sys 0m4.795s 00:17:57.958 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:57.958 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:57.958 ************************************ 00:17:57.958 END TEST nvmf_digest_clean 00:17:57.958 ************************************ 00:17:57.958 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:17:57.958 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:57.958 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:57.958 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:57.958 ************************************ 00:17:57.958 START TEST nvmf_digest_error 00:17:57.958 ************************************ 00:17:57.958 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:17:57.958 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:17:57.958 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:57.958 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:57.958 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:57.958 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=79641 00:17:57.958 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:57.958 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # 
waitforlisten 79641 00:17:57.958 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79641 ']' 00:17:57.958 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.958 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:57.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.958 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.958 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:57.958 19:56:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:57.958 [2024-07-24 19:56:26.578168] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:17:57.958 [2024-07-24 19:56:26.578279] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.218 [2024-07-24 19:56:26.714282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.218 [2024-07-24 19:56:26.825247] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.218 [2024-07-24 19:56:26.825322] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.218 [2024-07-24 19:56:26.825349] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.218 [2024-07-24 19:56:26.825358] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.218 [2024-07-24 19:56:26.825366] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
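The digest_error phase that starts here reuses the same bperf plumbing but re-routes crc32c on the target through the accel "error" module so that data-digest failures can be provoked on demand. The relevant RPCs are all visible in the lines that follow; they condense to the sketch below, where rpc_cmd is the autotest wrapper assumed to talk to the nvmf_tgt's default RPC socket, while the explicit rpc.py -s /var/tmp/bperf.sock calls go to the bdevperf initiator, exactly as in the trace.

    # target side: route crc32c through the error-injection module before the TCP listener is configured
    rpc_cmd accel_assign_opc -o crc32c -m error

    # initiator side: keep per-error NVMe stats, disable bdev retries, attach with data digest on
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # arm the corruption (parameters as recorded in the trace), then run the workload; the expected
    # outcome is the stream of "data digest error" / TRANSIENT TRANSPORT ERROR completions printed below
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests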
00:17:58.218 [2024-07-24 19:56:26.825400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:59.152 [2024-07-24 19:56:27.561909] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:59.152 [2024-07-24 19:56:27.622444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:59.152 null0 00:17:59.152 [2024-07-24 19:56:27.670658] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.152 [2024-07-24 19:56:27.694806] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79673 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79673 /var/tmp/bperf.sock 00:17:59.152 19:56:27 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79673 ']' 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:59.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:59.152 19:56:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:59.152 [2024-07-24 19:56:27.768205] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:17:59.152 [2024-07-24 19:56:27.768367] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79673 ] 00:17:59.411 [2024-07-24 19:56:27.915581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.411 [2024-07-24 19:56:28.038883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.669 [2024-07-24 19:56:28.096694] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:00.236 19:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:00.236 19:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:00.236 19:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:00.236 19:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:00.503 19:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:00.503 19:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.503 19:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:00.503 19:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.503 19:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:00.503 19:56:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:00.796 nvme0n1 00:18:00.796 19:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:00.796 19:56:29 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.796 19:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:00.796 19:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.796 19:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:00.796 19:56:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:00.796 Running I/O for 2 seconds... 00:18:00.796 [2024-07-24 19:56:29.457404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:00.796 [2024-07-24 19:56:29.457475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.796 [2024-07-24 19:56:29.457493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.056 [2024-07-24 19:56:29.475053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.056 [2024-07-24 19:56:29.475135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.056 [2024-07-24 19:56:29.475169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.056 [2024-07-24 19:56:29.492093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.056 [2024-07-24 19:56:29.492160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.056 [2024-07-24 19:56:29.492176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.056 [2024-07-24 19:56:29.508869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.056 [2024-07-24 19:56:29.508916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.056 [2024-07-24 19:56:29.508932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.056 [2024-07-24 19:56:29.525780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.056 [2024-07-24 19:56:29.525843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.056 [2024-07-24 19:56:29.525876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.056 [2024-07-24 19:56:29.542318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.056 [2024-07-24 19:56:29.542381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16816 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.056 [2024-07-24 19:56:29.542413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.056 [2024-07-24 19:56:29.558800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.056 [2024-07-24 19:56:29.558863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.056 [2024-07-24 19:56:29.558895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.056 [2024-07-24 19:56:29.575358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.056 [2024-07-24 19:56:29.575440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.056 [2024-07-24 19:56:29.575474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.056 [2024-07-24 19:56:29.591939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.056 [2024-07-24 19:56:29.592005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.056 [2024-07-24 19:56:29.592038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.056 [2024-07-24 19:56:29.608560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.056 [2024-07-24 19:56:29.608608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.056 [2024-07-24 19:56:29.608624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.056 [2024-07-24 19:56:29.625011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.056 [2024-07-24 19:56:29.625074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.056 [2024-07-24 19:56:29.625090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.056 [2024-07-24 19:56:29.642208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.056 [2024-07-24 19:56:29.642271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.056 [2024-07-24 19:56:29.642288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.056 [2024-07-24 19:56:29.659616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.056 [2024-07-24 19:56:29.659678] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.056 [2024-07-24 19:56:29.659710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.056 [2024-07-24 19:56:29.676859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.056 [2024-07-24 19:56:29.676921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.056 [2024-07-24 19:56:29.676953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.056 [2024-07-24 19:56:29.693541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.056 [2024-07-24 19:56:29.693602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.056 [2024-07-24 19:56:29.693635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.056 [2024-07-24 19:56:29.709913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.056 [2024-07-24 19:56:29.709976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.056 [2024-07-24 19:56:29.709992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.315 [2024-07-24 19:56:29.727042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.315 [2024-07-24 19:56:29.727103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.315 [2024-07-24 19:56:29.727135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.315 [2024-07-24 19:56:29.744505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.315 [2024-07-24 19:56:29.744552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.315 [2024-07-24 19:56:29.744568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.315 [2024-07-24 19:56:29.761136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.315 [2024-07-24 19:56:29.761198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.315 [2024-07-24 19:56:29.761230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.315 [2024-07-24 19:56:29.777849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.315 [2024-07-24 19:56:29.777912] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.315 [2024-07-24 19:56:29.777927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.315 [2024-07-24 19:56:29.794885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.315 [2024-07-24 19:56:29.794949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.315 [2024-07-24 19:56:29.794966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.315 [2024-07-24 19:56:29.811709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.315 [2024-07-24 19:56:29.811763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.315 [2024-07-24 19:56:29.811779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.315 [2024-07-24 19:56:29.828668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.315 [2024-07-24 19:56:29.828722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.315 [2024-07-24 19:56:29.828750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.315 [2024-07-24 19:56:29.845612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.315 [2024-07-24 19:56:29.845658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.315 [2024-07-24 19:56:29.845674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.315 [2024-07-24 19:56:29.862511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.315 [2024-07-24 19:56:29.862556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.315 [2024-07-24 19:56:29.862571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.315 [2024-07-24 19:56:29.879388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.315 [2024-07-24 19:56:29.879437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.315 [2024-07-24 19:56:29.879453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.315 [2024-07-24 19:56:29.896487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24dc4f0) 00:18:01.315 [2024-07-24 19:56:29.896537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.315 [2024-07-24 19:56:29.896554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.315 [2024-07-24 19:56:29.913298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.316 [2024-07-24 19:56:29.913362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.316 [2024-07-24 19:56:29.913394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.316 [2024-07-24 19:56:29.930234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.316 [2024-07-24 19:56:29.930297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.316 [2024-07-24 19:56:29.930330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.316 [2024-07-24 19:56:29.946964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.316 [2024-07-24 19:56:29.947029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.316 [2024-07-24 19:56:29.947061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.316 [2024-07-24 19:56:29.963637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.316 [2024-07-24 19:56:29.963681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.316 [2024-07-24 19:56:29.963697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.316 [2024-07-24 19:56:29.980345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.316 [2024-07-24 19:56:29.980391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.316 [2024-07-24 19:56:29.980406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.575 [2024-07-24 19:56:29.997116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.575 [2024-07-24 19:56:29.997176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.575 [2024-07-24 19:56:29.997208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.575 [2024-07-24 19:56:30.013414] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.575 [2024-07-24 19:56:30.013463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.575 [2024-07-24 19:56:30.013494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.575 [2024-07-24 19:56:30.030437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.575 [2024-07-24 19:56:30.030502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.575 [2024-07-24 19:56:30.030518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.575 [2024-07-24 19:56:30.047668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.575 [2024-07-24 19:56:30.047785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.575 [2024-07-24 19:56:30.047804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.575 [2024-07-24 19:56:30.064629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.575 [2024-07-24 19:56:30.064725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.575 [2024-07-24 19:56:30.064768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.575 [2024-07-24 19:56:30.081381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.575 [2024-07-24 19:56:30.081443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.575 [2024-07-24 19:56:30.081476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.575 [2024-07-24 19:56:30.098204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.575 [2024-07-24 19:56:30.098264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.575 [2024-07-24 19:56:30.098297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.575 [2024-07-24 19:56:30.114959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.575 [2024-07-24 19:56:30.115019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.575 [2024-07-24 19:56:30.115052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:01.575 [2024-07-24 19:56:30.131511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.575 [2024-07-24 19:56:30.131557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.575 [2024-07-24 19:56:30.131589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.575 [2024-07-24 19:56:30.148051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.575 [2024-07-24 19:56:30.148113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.575 [2024-07-24 19:56:30.148145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.575 [2024-07-24 19:56:30.164555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.575 [2024-07-24 19:56:30.164602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.575 [2024-07-24 19:56:30.164617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.575 [2024-07-24 19:56:30.181647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.575 [2024-07-24 19:56:30.181711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.575 [2024-07-24 19:56:30.181726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.575 [2024-07-24 19:56:30.198806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.575 [2024-07-24 19:56:30.198872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.575 [2024-07-24 19:56:30.198888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.575 [2024-07-24 19:56:30.215861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.575 [2024-07-24 19:56:30.215908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.575 [2024-07-24 19:56:30.215924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.575 [2024-07-24 19:56:30.233094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.575 [2024-07-24 19:56:30.233172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.575 [2024-07-24 19:56:30.233221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.835 [2024-07-24 19:56:30.250264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.835 [2024-07-24 19:56:30.250327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.835 [2024-07-24 19:56:30.250343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.835 [2024-07-24 19:56:30.267452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.835 [2024-07-24 19:56:30.267518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.835 [2024-07-24 19:56:30.267534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.835 [2024-07-24 19:56:30.284632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.835 [2024-07-24 19:56:30.284680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.835 [2024-07-24 19:56:30.284696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.835 [2024-07-24 19:56:30.301602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.835 [2024-07-24 19:56:30.301650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.835 [2024-07-24 19:56:30.301666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.835 [2024-07-24 19:56:30.319008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.835 [2024-07-24 19:56:30.319055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.835 [2024-07-24 19:56:30.319071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.835 [2024-07-24 19:56:30.336130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.835 [2024-07-24 19:56:30.336195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.835 [2024-07-24 19:56:30.336211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.835 [2024-07-24 19:56:30.353499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.835 [2024-07-24 19:56:30.353560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.835 [2024-07-24 19:56:30.353592] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.835 [2024-07-24 19:56:30.370134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.835 [2024-07-24 19:56:30.370196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.835 [2024-07-24 19:56:30.370228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.835 [2024-07-24 19:56:30.386705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.835 [2024-07-24 19:56:30.386797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.835 [2024-07-24 19:56:30.386814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.835 [2024-07-24 19:56:30.403244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.835 [2024-07-24 19:56:30.403306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.835 [2024-07-24 19:56:30.403321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.835 [2024-07-24 19:56:30.419711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.835 [2024-07-24 19:56:30.419787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.835 [2024-07-24 19:56:30.419820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.835 [2024-07-24 19:56:30.436492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.835 [2024-07-24 19:56:30.436537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.835 [2024-07-24 19:56:30.436553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.835 [2024-07-24 19:56:30.453830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.835 [2024-07-24 19:56:30.453888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.835 [2024-07-24 19:56:30.453920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.835 [2024-07-24 19:56:30.471174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.835 [2024-07-24 19:56:30.471220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14216 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:01.835 [2024-07-24 19:56:30.471236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:01.835 [2024-07-24 19:56:30.488570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:01.835 [2024-07-24 19:56:30.488617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:01.835 [2024-07-24 19:56:30.488632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.094 [2024-07-24 19:56:30.505586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.094 [2024-07-24 19:56:30.505651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.094 [2024-07-24 19:56:30.505683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.094 [2024-07-24 19:56:30.530011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.094 [2024-07-24 19:56:30.530078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.094 [2024-07-24 19:56:30.530110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.094 [2024-07-24 19:56:30.546955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.094 [2024-07-24 19:56:30.547052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.094 [2024-07-24 19:56:30.547085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.094 [2024-07-24 19:56:30.563735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.094 [2024-07-24 19:56:30.563839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.094 [2024-07-24 19:56:30.563874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.094 [2024-07-24 19:56:30.580068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.094 [2024-07-24 19:56:30.580162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.094 [2024-07-24 19:56:30.580195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.094 [2024-07-24 19:56:30.596738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.094 [2024-07-24 19:56:30.596841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:118 nsid:1 lba:16098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.094 [2024-07-24 19:56:30.596874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.094 [2024-07-24 19:56:30.613448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.094 [2024-07-24 19:56:30.613545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.094 [2024-07-24 19:56:30.613579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.094 [2024-07-24 19:56:30.630064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.094 [2024-07-24 19:56:30.630145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.094 [2024-07-24 19:56:30.630178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.094 [2024-07-24 19:56:30.646215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.094 [2024-07-24 19:56:30.646276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.094 [2024-07-24 19:56:30.646308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.094 [2024-07-24 19:56:30.662987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.094 [2024-07-24 19:56:30.663071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.094 [2024-07-24 19:56:30.663104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.094 [2024-07-24 19:56:30.679637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.094 [2024-07-24 19:56:30.679701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.094 [2024-07-24 19:56:30.679734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.094 [2024-07-24 19:56:30.695795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.094 [2024-07-24 19:56:30.695867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.094 [2024-07-24 19:56:30.695900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.094 [2024-07-24 19:56:30.712865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.094 [2024-07-24 19:56:30.712970] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.095 [2024-07-24 19:56:30.713004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.095 [2024-07-24 19:56:30.730502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.095 [2024-07-24 19:56:30.730582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.095 [2024-07-24 19:56:30.730599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.095 [2024-07-24 19:56:30.747831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.095 [2024-07-24 19:56:30.747899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.095 [2024-07-24 19:56:30.747931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.353 [2024-07-24 19:56:30.765621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.353 [2024-07-24 19:56:30.765701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.353 [2024-07-24 19:56:30.765717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.353 [2024-07-24 19:56:30.783373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.353 [2024-07-24 19:56:30.783420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.353 [2024-07-24 19:56:30.783435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.353 [2024-07-24 19:56:30.800844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.353 [2024-07-24 19:56:30.800890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.354 [2024-07-24 19:56:30.800923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.354 [2024-07-24 19:56:30.817604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.354 [2024-07-24 19:56:30.817680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.354 [2024-07-24 19:56:30.817697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.354 [2024-07-24 19:56:30.834533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24dc4f0) 00:18:02.354 [2024-07-24 19:56:30.834610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.354 [2024-07-24 19:56:30.834628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.354 [2024-07-24 19:56:30.851414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.354 [2024-07-24 19:56:30.851493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.354 [2024-07-24 19:56:30.851510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.354 [2024-07-24 19:56:30.868157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.354 [2024-07-24 19:56:30.868232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.354 [2024-07-24 19:56:30.868249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.354 [2024-07-24 19:56:30.885152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.354 [2024-07-24 19:56:30.885205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.354 [2024-07-24 19:56:30.885222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.354 [2024-07-24 19:56:30.902489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.354 [2024-07-24 19:56:30.902551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.354 [2024-07-24 19:56:30.902568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.354 [2024-07-24 19:56:30.920583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.354 [2024-07-24 19:56:30.920684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.354 [2024-07-24 19:56:30.920702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.354 [2024-07-24 19:56:30.938490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.354 [2024-07-24 19:56:30.938573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.354 [2024-07-24 19:56:30.938590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.354 [2024-07-24 19:56:30.955952] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.354 [2024-07-24 19:56:30.956018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.354 [2024-07-24 19:56:30.956035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.354 [2024-07-24 19:56:30.973132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.354 [2024-07-24 19:56:30.973200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.354 [2024-07-24 19:56:30.973216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.354 [2024-07-24 19:56:30.990674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.354 [2024-07-24 19:56:30.990771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.354 [2024-07-24 19:56:30.990789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.354 [2024-07-24 19:56:31.008418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.354 [2024-07-24 19:56:31.008500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.354 [2024-07-24 19:56:31.008518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.614 [2024-07-24 19:56:31.025732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.614 [2024-07-24 19:56:31.025808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.614 [2024-07-24 19:56:31.025825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.614 [2024-07-24 19:56:31.042762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.614 [2024-07-24 19:56:31.042827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.614 [2024-07-24 19:56:31.042843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.614 [2024-07-24 19:56:31.060070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.614 [2024-07-24 19:56:31.060151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.614 [2024-07-24 19:56:31.060169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:02.614 [2024-07-24 19:56:31.077293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.614 [2024-07-24 19:56:31.077365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.614 [2024-07-24 19:56:31.077382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.614 [2024-07-24 19:56:31.094785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.614 [2024-07-24 19:56:31.094884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.614 [2024-07-24 19:56:31.094901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.614 [2024-07-24 19:56:31.111623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.614 [2024-07-24 19:56:31.111708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.614 [2024-07-24 19:56:31.111724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.614 [2024-07-24 19:56:31.128456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.614 [2024-07-24 19:56:31.128518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.614 [2024-07-24 19:56:31.128535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.614 [2024-07-24 19:56:31.145612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.614 [2024-07-24 19:56:31.145732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.614 [2024-07-24 19:56:31.145763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.614 [2024-07-24 19:56:31.162641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.614 [2024-07-24 19:56:31.162751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.614 [2024-07-24 19:56:31.162770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.614 [2024-07-24 19:56:31.179835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.614 [2024-07-24 19:56:31.179929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.614 [2024-07-24 19:56:31.179954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.614 [2024-07-24 19:56:31.197232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.614 [2024-07-24 19:56:31.197345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.614 [2024-07-24 19:56:31.197363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.614 [2024-07-24 19:56:31.214355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.614 [2024-07-24 19:56:31.214453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.614 [2024-07-24 19:56:31.214471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.614 [2024-07-24 19:56:31.231730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.614 [2024-07-24 19:56:31.231797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.614 [2024-07-24 19:56:31.231815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.614 [2024-07-24 19:56:31.249605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.614 [2024-07-24 19:56:31.249669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.614 [2024-07-24 19:56:31.249685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.614 [2024-07-24 19:56:31.267460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.614 [2024-07-24 19:56:31.267551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.614 [2024-07-24 19:56:31.267568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.875 [2024-07-24 19:56:31.285306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.875 [2024-07-24 19:56:31.285399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.875 [2024-07-24 19:56:31.285424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.875 [2024-07-24 19:56:31.302402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.875 [2024-07-24 19:56:31.302486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.875 [2024-07-24 19:56:31.302503] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.875 [2024-07-24 19:56:31.319970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.875 [2024-07-24 19:56:31.320037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.875 [2024-07-24 19:56:31.320054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.875 [2024-07-24 19:56:31.337577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.875 [2024-07-24 19:56:31.337663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.875 [2024-07-24 19:56:31.337695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.875 [2024-07-24 19:56:31.355573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.875 [2024-07-24 19:56:31.355660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.875 [2024-07-24 19:56:31.355687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.875 [2024-07-24 19:56:31.373507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.875 [2024-07-24 19:56:31.373621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.875 [2024-07-24 19:56:31.373638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.875 [2024-07-24 19:56:31.390651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.875 [2024-07-24 19:56:31.390715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.875 [2024-07-24 19:56:31.390731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.875 [2024-07-24 19:56:31.407822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.875 [2024-07-24 19:56:31.407892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.875 [2024-07-24 19:56:31.407926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.875 [2024-07-24 19:56:31.424323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.875 [2024-07-24 19:56:31.424370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:02.875 [2024-07-24 19:56:31.424386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.875 [2024-07-24 19:56:31.440471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24dc4f0) 00:18:02.876 [2024-07-24 19:56:31.440521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:02.876 [2024-07-24 19:56:31.440537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:02.876 00:18:02.876 Latency(us) 00:18:02.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.876 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:02.876 nvme0n1 : 2.01 14854.34 58.02 0.00 0.00 8608.81 7804.74 32887.16 00:18:02.876 =================================================================================================================== 00:18:02.876 Total : 14854.34 58.02 0.00 0.00 8608.81 7804.74 32887.16 00:18:02.876 0 00:18:02.876 19:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:02.876 19:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:02.876 19:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:02.876 19:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:02.876 | .driver_specific 00:18:02.876 | .nvme_error 00:18:02.876 | .status_code 00:18:02.876 | .command_transient_transport_error' 00:18:03.133 19:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 117 > 0 )) 00:18:03.133 19:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79673 00:18:03.133 19:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79673 ']' 00:18:03.133 19:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79673 00:18:03.133 19:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:03.133 19:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:03.133 19:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79673 00:18:03.133 19:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:03.133 19:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:03.133 killing process with pid 79673 00:18:03.133 19:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79673' 00:18:03.133 Received shutdown signal, test time was about 2.000000 seconds 00:18:03.133 00:18:03.133 Latency(us) 00:18:03.133 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.133 =================================================================================================================== 
00:18:03.133 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:03.133 19:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79673 00:18:03.133 19:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79673 00:18:03.391 19:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:18:03.391 19:56:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:03.391 19:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:03.391 19:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:03.391 19:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:03.391 19:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79728 00:18:03.391 19:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79728 /var/tmp/bperf.sock 00:18:03.391 19:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:18:03.391 19:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79728 ']' 00:18:03.391 19:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:03.391 19:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:03.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:03.391 19:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:03.391 19:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:03.391 19:56:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:03.391 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:03.391 Zero copy mechanism will not be used. 00:18:03.391 [2024-07-24 19:56:32.054757] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
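The xtrace above shows how the harness turns the flood of digest errors from the first run into a pass/fail check: it queries bdevperf's RPC socket for per-bdev iostat and pulls the NVMe transient-transport-error counter out of the JSON with jq, then asserts that the count is non-zero (117 in this run) before tearing the process down. A minimal stand-alone sketch of that query, reusing the rpc.py invocation and jq path visible in the trace and assuming bdevperf was started with --nvme-error-stat, looks like this:

  #!/usr/bin/env bash
  # Sketch of the get_transient_errcount step traced above, not the harness itself.
  bdev=nvme0n1
  count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
              bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code
                           | .command_transient_transport_error')
  # The check only requires that at least one injected digest error surfaced as a
  # COMMAND TRANSIENT TRANSPORT ERROR completion.
  (( count > 0 )) && echo "transient transport errors recorded: $count"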
00:18:03.391 [2024-07-24 19:56:32.054878] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79728 ] 00:18:03.717 [2024-07-24 19:56:32.194246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.717 [2024-07-24 19:56:32.306728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.717 [2024-07-24 19:56:32.360873] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:04.673 19:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:04.673 19:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:04.673 19:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:04.673 19:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:04.673 19:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:04.673 19:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.673 19:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:04.673 19:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.673 19:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:04.673 19:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:04.929 nvme0n1 00:18:04.930 19:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:04.930 19:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.930 19:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:05.188 19:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.188 19:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:05.188 19:56:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:05.188 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:05.188 Zero copy mechanism will not be used. 00:18:05.188 Running I/O for 2 seconds... 
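Before the per-I/O error lines below begin, the trace above captures the full setup of the second bperf run: NVMe error statistics and retry behaviour are configured on the bdev_nvme layer, the TCP controller is attached with data digest enabled while injection is still disabled, crc32c corruption is then switched on through the accel error RPC, and bdevperf.py starts the timed workload. A minimal sketch of that sequence follows; the bperf socket, target address and NQN are copied verbatim from the trace, while target_rpc is a hypothetical stand-in for the harness's rpc_cmd wrapper, whose destination socket is not expanded in this excerpt:

  #!/usr/bin/env bash
  # Sketch of the bperf setup traced above, not the harness itself.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf_sock=/var/tmp/bperf.sock
  target_rpc() { "$rpc" "$@"; }   # hypothetical stand-in for the harness's rpc_cmd

  # Record NVMe error status codes per bdev and keep retrying failed I/O (-1).
  "$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the TCP controller with data digest enabled while injection is off.
  target_rpc accel_error_inject_error -o crc32c -t disable
  "$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Turn on crc32c corruption (arguments as in the trace); this is what makes the
  # receive-side data digest verification start failing.
  target_rpc accel_error_inject_error -o crc32c -t corrupt -i 32

  # Drive the timed randread workload whose completions are logged below.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s "$bperf_sock" perform_tests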
00:18:05.188 [2024-07-24 19:56:33.729777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.188 [2024-07-24 19:56:33.729841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.188 [2024-07-24 19:56:33.729860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.188 [2024-07-24 19:56:33.734604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.188 [2024-07-24 19:56:33.734676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.188 [2024-07-24 19:56:33.734693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.188 [2024-07-24 19:56:33.739503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.188 [2024-07-24 19:56:33.739565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.188 [2024-07-24 19:56:33.739581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.188 [2024-07-24 19:56:33.744114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.188 [2024-07-24 19:56:33.744176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.188 [2024-07-24 19:56:33.744193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.188 [2024-07-24 19:56:33.748920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.188 [2024-07-24 19:56:33.748983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.188 [2024-07-24 19:56:33.748999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.188 [2024-07-24 19:56:33.753582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.188 [2024-07-24 19:56:33.753645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.188 [2024-07-24 19:56:33.753660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.188 [2024-07-24 19:56:33.758316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.188 [2024-07-24 19:56:33.758380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.188 [2024-07-24 19:56:33.758396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.188 [2024-07-24 19:56:33.762966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.188 [2024-07-24 19:56:33.763012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.189 [2024-07-24 19:56:33.763027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.189 [2024-07-24 19:56:33.767660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.189 [2024-07-24 19:56:33.767725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.189 [2024-07-24 19:56:33.767757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.189 [2024-07-24 19:56:33.772259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.189 [2024-07-24 19:56:33.772335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.189 [2024-07-24 19:56:33.772351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.189 [2024-07-24 19:56:33.777026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.189 [2024-07-24 19:56:33.777090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.189 [2024-07-24 19:56:33.777106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.189 [2024-07-24 19:56:33.781729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.189 [2024-07-24 19:56:33.781804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.189 [2024-07-24 19:56:33.781820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.189 [2024-07-24 19:56:33.786472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.189 [2024-07-24 19:56:33.786521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.189 [2024-07-24 19:56:33.786536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.189 [2024-07-24 19:56:33.791288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.189 [2024-07-24 19:56:33.791353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.189 [2024-07-24 19:56:33.791369] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.189 [2024-07-24 19:56:33.796112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.189 [2024-07-24 19:56:33.796159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.189 [2024-07-24 19:56:33.796175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.189 [2024-07-24 19:56:33.800897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.189 [2024-07-24 19:56:33.800960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.189 [2024-07-24 19:56:33.800976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.189 [2024-07-24 19:56:33.805446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.189 [2024-07-24 19:56:33.805510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.189 [2024-07-24 19:56:33.805526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.189 [2024-07-24 19:56:33.810181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.189 [2024-07-24 19:56:33.810240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.189 [2024-07-24 19:56:33.810272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.189 [2024-07-24 19:56:33.814985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.189 [2024-07-24 19:56:33.815046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.189 [2024-07-24 19:56:33.815062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.189 [2024-07-24 19:56:33.819556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.189 [2024-07-24 19:56:33.819620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.189 [2024-07-24 19:56:33.819652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.189 [2024-07-24 19:56:33.824284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.189 [2024-07-24 19:56:33.824374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:05.189 [2024-07-24 19:56:33.824390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.189 [2024-07-24 19:56:33.829038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.189 [2024-07-24 19:56:33.829100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.189 [2024-07-24 19:56:33.829132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.189 [2024-07-24 19:56:33.833811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.189 [2024-07-24 19:56:33.833874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.189 [2024-07-24 19:56:33.833906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.189 [2024-07-24 19:56:33.838610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.189 [2024-07-24 19:56:33.838675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.189 [2024-07-24 19:56:33.838708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.189 [2024-07-24 19:56:33.843363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.189 [2024-07-24 19:56:33.843430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.189 [2024-07-24 19:56:33.843446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.189 [2024-07-24 19:56:33.848055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.189 [2024-07-24 19:56:33.848117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.189 [2024-07-24 19:56:33.848149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.189 [2024-07-24 19:56:33.852659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.189 [2024-07-24 19:56:33.852766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.189 [2024-07-24 19:56:33.852784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.189 [2024-07-24 19:56:33.857399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.189 [2024-07-24 19:56:33.857457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.189 [2024-07-24 19:56:33.857490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.449 [2024-07-24 19:56:33.862016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.449 [2024-07-24 19:56:33.862078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.449 [2024-07-24 19:56:33.862111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.449 [2024-07-24 19:56:33.866572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.449 [2024-07-24 19:56:33.866636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.449 [2024-07-24 19:56:33.866668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.449 [2024-07-24 19:56:33.871267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.449 [2024-07-24 19:56:33.871330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.449 [2024-07-24 19:56:33.871362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.449 [2024-07-24 19:56:33.876183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.449 [2024-07-24 19:56:33.876229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.449 [2024-07-24 19:56:33.876251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.449 [2024-07-24 19:56:33.881034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.449 [2024-07-24 19:56:33.881096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.449 [2024-07-24 19:56:33.881112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.449 [2024-07-24 19:56:33.885840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.449 [2024-07-24 19:56:33.885902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.449 [2024-07-24 19:56:33.885918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.449 [2024-07-24 19:56:33.890740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.449 [2024-07-24 19:56:33.890829] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.449 [2024-07-24 19:56:33.890862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.449 [2024-07-24 19:56:33.895609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.449 [2024-07-24 19:56:33.895671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.449 [2024-07-24 19:56:33.895704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.449 [2024-07-24 19:56:33.900229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.449 [2024-07-24 19:56:33.900291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.449 [2024-07-24 19:56:33.900333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.449 [2024-07-24 19:56:33.904885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.449 [2024-07-24 19:56:33.904947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.450 [2024-07-24 19:56:33.904979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.450 [2024-07-24 19:56:33.909569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.450 [2024-07-24 19:56:33.909635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.450 [2024-07-24 19:56:33.909650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.450 [2024-07-24 19:56:33.914308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.450 [2024-07-24 19:56:33.914371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.450 [2024-07-24 19:56:33.914403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.450 [2024-07-24 19:56:33.918938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.450 [2024-07-24 19:56:33.919001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.450 [2024-07-24 19:56:33.919017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.450 [2024-07-24 19:56:33.923599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 
00:18:05.450 [2024-07-24 19:56:33.923662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.450 [2024-07-24 19:56:33.923678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.450 [2024-07-24 19:56:33.928201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.450 [2024-07-24 19:56:33.928264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.450 [2024-07-24 19:56:33.928296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.450 [2024-07-24 19:56:33.932818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.450 [2024-07-24 19:56:33.932879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.450 [2024-07-24 19:56:33.932910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.450 [2024-07-24 19:56:33.937534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.450 [2024-07-24 19:56:33.937598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.450 [2024-07-24 19:56:33.937615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.450 [2024-07-24 19:56:33.942364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.450 [2024-07-24 19:56:33.942427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.450 [2024-07-24 19:56:33.942443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.450 [2024-07-24 19:56:33.947042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.450 [2024-07-24 19:56:33.947119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.450 [2024-07-24 19:56:33.947152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.450 [2024-07-24 19:56:33.951720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.450 [2024-07-24 19:56:33.951795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.450 [2024-07-24 19:56:33.951828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.450 [2024-07-24 19:56:33.956338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.450 [2024-07-24 19:56:33.956385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.450 [2024-07-24 19:56:33.956399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.450 [2024-07-24 19:56:33.960989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.450 [2024-07-24 19:56:33.961051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.450 [2024-07-24 19:56:33.961083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.450 [2024-07-24 19:56:33.965568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.450 [2024-07-24 19:56:33.965633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.450 [2024-07-24 19:56:33.965648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.450 [2024-07-24 19:56:33.970314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.450 [2024-07-24 19:56:33.970377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.450 [2024-07-24 19:56:33.970409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.450 [2024-07-24 19:56:33.975058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.450 [2024-07-24 19:56:33.975137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.450 [2024-07-24 19:56:33.975169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.450 [2024-07-24 19:56:33.979765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.450 [2024-07-24 19:56:33.979842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.450 [2024-07-24 19:56:33.979874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.450 [2024-07-24 19:56:33.984520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.450 [2024-07-24 19:56:33.984569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.450 [2024-07-24 19:56:33.984584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.450 [2024-07-24 19:56:33.989209] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.450 [2024-07-24 19:56:33.989272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.450 [2024-07-24 19:56:33.989304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.450 [2024-07-24 19:56:33.993882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.450 [2024-07-24 19:56:33.993943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.450 [2024-07-24 19:56:33.993976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.450 [2024-07-24 19:56:33.998663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.451 [2024-07-24 19:56:33.998727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.451 [2024-07-24 19:56:33.998773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.451 [2024-07-24 19:56:34.003497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.451 [2024-07-24 19:56:34.003592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.451 [2024-07-24 19:56:34.003624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.451 [2024-07-24 19:56:34.008211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.451 [2024-07-24 19:56:34.008275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.451 [2024-07-24 19:56:34.008290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.451 [2024-07-24 19:56:34.012899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.451 [2024-07-24 19:56:34.012960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.451 [2024-07-24 19:56:34.012993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.451 [2024-07-24 19:56:34.017603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.451 [2024-07-24 19:56:34.017666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.451 [2024-07-24 19:56:34.017682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:18:05.451 [2024-07-24 19:56:34.022298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.451 [2024-07-24 19:56:34.022361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.451 [2024-07-24 19:56:34.022392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.451 [2024-07-24 19:56:34.026907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.451 [2024-07-24 19:56:34.026969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.451 [2024-07-24 19:56:34.026985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.451 [2024-07-24 19:56:34.031566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.451 [2024-07-24 19:56:34.031630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.451 [2024-07-24 19:56:34.031662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.451 [2024-07-24 19:56:34.036191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.451 [2024-07-24 19:56:34.036255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.451 [2024-07-24 19:56:34.036287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.451 [2024-07-24 19:56:34.040931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.451 [2024-07-24 19:56:34.040992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.451 [2024-07-24 19:56:34.041025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.451 [2024-07-24 19:56:34.045769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.451 [2024-07-24 19:56:34.045842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.451 [2024-07-24 19:56:34.045875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.451 [2024-07-24 19:56:34.050428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.451 [2024-07-24 19:56:34.050491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.451 [2024-07-24 19:56:34.050523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.451 [2024-07-24 19:56:34.055161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.451 [2024-07-24 19:56:34.055223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.451 [2024-07-24 19:56:34.055256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.451 [2024-07-24 19:56:34.059884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.451 [2024-07-24 19:56:34.059946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.451 [2024-07-24 19:56:34.059978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.451 [2024-07-24 19:56:34.064520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.451 [2024-07-24 19:56:34.064568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.451 [2024-07-24 19:56:34.064583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.451 [2024-07-24 19:56:34.069114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.451 [2024-07-24 19:56:34.069177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.451 [2024-07-24 19:56:34.069209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.451 [2024-07-24 19:56:34.073750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.451 [2024-07-24 19:56:34.073824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.451 [2024-07-24 19:56:34.073856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.451 [2024-07-24 19:56:34.078393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.451 [2024-07-24 19:56:34.078455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.451 [2024-07-24 19:56:34.078487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.451 [2024-07-24 19:56:34.083172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.451 [2024-07-24 19:56:34.083234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.451 [2024-07-24 19:56:34.083266] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.451 [2024-07-24 19:56:34.087890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.451 [2024-07-24 19:56:34.087949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.451 [2024-07-24 19:56:34.087980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.451 [2024-07-24 19:56:34.092367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.451 [2024-07-24 19:56:34.092433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.451 [2024-07-24 19:56:34.092450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.451 [2024-07-24 19:56:34.097124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.451 [2024-07-24 19:56:34.097185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.451 [2024-07-24 19:56:34.097216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.451 [2024-07-24 19:56:34.101880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.451 [2024-07-24 19:56:34.101942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.451 [2024-07-24 19:56:34.101975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.451 [2024-07-24 19:56:34.106407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.451 [2024-07-24 19:56:34.106476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.451 [2024-07-24 19:56:34.106492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.451 [2024-07-24 19:56:34.111309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.451 [2024-07-24 19:56:34.111356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.451 [2024-07-24 19:56:34.111371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.452 [2024-07-24 19:56:34.116038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.452 [2024-07-24 19:56:34.116099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:05.452 [2024-07-24 19:56:34.116130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.712 [2024-07-24 19:56:34.121119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.712 [2024-07-24 19:56:34.121180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.712 [2024-07-24 19:56:34.121196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.712 [2024-07-24 19:56:34.125952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.712 [2024-07-24 19:56:34.126011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.712 [2024-07-24 19:56:34.126043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.712 [2024-07-24 19:56:34.130898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.712 [2024-07-24 19:56:34.130944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.712 [2024-07-24 19:56:34.130960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.712 [2024-07-24 19:56:34.135715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.712 [2024-07-24 19:56:34.135805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.712 [2024-07-24 19:56:34.135838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.712 [2024-07-24 19:56:34.140593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.712 [2024-07-24 19:56:34.140640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.712 [2024-07-24 19:56:34.140656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.712 [2024-07-24 19:56:34.145259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.712 [2024-07-24 19:56:34.145321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.712 [2024-07-24 19:56:34.145337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.712 [2024-07-24 19:56:34.150142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.712 [2024-07-24 19:56:34.150205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.712 [2024-07-24 19:56:34.150236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.712 [2024-07-24 19:56:34.155012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.712 [2024-07-24 19:56:34.155077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.712 [2024-07-24 19:56:34.155093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.712 [2024-07-24 19:56:34.159939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.712 [2024-07-24 19:56:34.160016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.712 [2024-07-24 19:56:34.160048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.712 [2024-07-24 19:56:34.164744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.712 [2024-07-24 19:56:34.164819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.712 [2024-07-24 19:56:34.164843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.712 [2024-07-24 19:56:34.169614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.712 [2024-07-24 19:56:34.169662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.712 [2024-07-24 19:56:34.169677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.712 [2024-07-24 19:56:34.174438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.712 [2024-07-24 19:56:34.174485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.712 [2024-07-24 19:56:34.174501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.712 [2024-07-24 19:56:34.179337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.712 [2024-07-24 19:56:34.179385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.712 [2024-07-24 19:56:34.179401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.712 [2024-07-24 19:56:34.184179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.712 [2024-07-24 19:56:34.184247] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.712 [2024-07-24 19:56:34.184263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.712 [2024-07-24 19:56:34.188995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.712 [2024-07-24 19:56:34.189046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.712 [2024-07-24 19:56:34.189061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.712 [2024-07-24 19:56:34.193712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.712 [2024-07-24 19:56:34.193776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.712 [2024-07-24 19:56:34.193792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.712 [2024-07-24 19:56:34.198541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.712 [2024-07-24 19:56:34.198597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.712 [2024-07-24 19:56:34.198613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.712 [2024-07-24 19:56:34.203401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.712 [2024-07-24 19:56:34.203452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.713 [2024-07-24 19:56:34.203467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.713 [2024-07-24 19:56:34.208239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.713 [2024-07-24 19:56:34.208316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.713 [2024-07-24 19:56:34.208334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.713 [2024-07-24 19:56:34.212912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.713 [2024-07-24 19:56:34.212978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.713 [2024-07-24 19:56:34.213010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.713 [2024-07-24 19:56:34.217629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.713 [2024-07-24 19:56:34.217696] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.713 [2024-07-24 19:56:34.217711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.713 [2024-07-24 19:56:34.222409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.713 [2024-07-24 19:56:34.222473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.713 [2024-07-24 19:56:34.222506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.713 [2024-07-24 19:56:34.227173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.713 [2024-07-24 19:56:34.227239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.713 [2024-07-24 19:56:34.227271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.713 [2024-07-24 19:56:34.231914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.713 [2024-07-24 19:56:34.231979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.713 [2024-07-24 19:56:34.232012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.713 [2024-07-24 19:56:34.236651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.713 [2024-07-24 19:56:34.236732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.713 [2024-07-24 19:56:34.236778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.713 [2024-07-24 19:56:34.241442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.713 [2024-07-24 19:56:34.241507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.713 [2024-07-24 19:56:34.241523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.713 [2024-07-24 19:56:34.246234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.713 [2024-07-24 19:56:34.246299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.713 [2024-07-24 19:56:34.246332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.713 [2024-07-24 19:56:34.250903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x66c200) 00:18:05.713 [2024-07-24 19:56:34.250980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.713 [2024-07-24 19:56:34.250997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.713 [2024-07-24 19:56:34.255613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.713 [2024-07-24 19:56:34.255679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.713 [2024-07-24 19:56:34.255712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.713 [2024-07-24 19:56:34.260470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.713 [2024-07-24 19:56:34.260520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.713 [2024-07-24 19:56:34.260544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.713 [2024-07-24 19:56:34.265189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.713 [2024-07-24 19:56:34.265256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.713 [2024-07-24 19:56:34.265289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.713 [2024-07-24 19:56:34.269934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.713 [2024-07-24 19:56:34.270001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.713 [2024-07-24 19:56:34.270033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.713 [2024-07-24 19:56:34.274527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.713 [2024-07-24 19:56:34.274591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.713 [2024-07-24 19:56:34.274625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.713 [2024-07-24 19:56:34.279224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:05.713 [2024-07-24 19:56:34.279290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.713 [2024-07-24 19:56:34.279323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.713 [2024-07-24 19:56:34.284007] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200)
00:18:05.713 [2024-07-24 19:56:34.284053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:05.713 [2024-07-24 19:56:34.284069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:18:05.713 [2024-07-24 19:56:34.288924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200)
00:18:05.713 [2024-07-24 19:56:34.288973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:05.713 [2024-07-24 19:56:34.288988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... 2024-07-24 19:56:34.293719 through 19:56:34.959981: the same three-record pattern repeats for each remaining READ on this qpair -- nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200); nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 len:32 (varying lba) SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd cycling 0001/0021/0041/0061 p:0 m:0 dnr:0 ...]
00:18:06.495 [2024-07-24 19:56:34.964657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200)
00:18:06.495 [2024-07-24 19:56:34.964750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:06.495 [2024-07-24 19:56:34.964767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:18:06.495 [2024-07-24 19:56:34.969380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200)
00:18:06.495 [2024-07-24 19:56:34.969427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:06.495 [2024-07-24 19:56:34.969443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.495 [2024-07-24 19:56:34.974099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.495 [2024-07-24 19:56:34.974147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.495 [2024-07-24 19:56:34.974161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.495 [2024-07-24 19:56:34.978735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.495 [2024-07-24 19:56:34.978810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.495 [2024-07-24 19:56:34.978827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.495 [2024-07-24 19:56:34.983504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.495 [2024-07-24 19:56:34.983568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.495 [2024-07-24 19:56:34.983583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.495 [2024-07-24 19:56:34.988191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.495 [2024-07-24 19:56:34.988254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.495 [2024-07-24 19:56:34.988269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.495 [2024-07-24 19:56:34.993084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.495 [2024-07-24 19:56:34.993130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.495 [2024-07-24 19:56:34.993146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.495 [2024-07-24 19:56:34.997772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.495 [2024-07-24 19:56:34.997817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.495 [2024-07-24 19:56:34.997832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.495 [2024-07-24 19:56:35.002503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.495 [2024-07-24 19:56:35.002566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.495 [2024-07-24 19:56:35.002582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.495 [2024-07-24 19:56:35.007263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.495 [2024-07-24 19:56:35.007328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.495 [2024-07-24 19:56:35.007343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.495 [2024-07-24 19:56:35.011894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.495 [2024-07-24 19:56:35.011957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.495 [2024-07-24 19:56:35.011973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.495 [2024-07-24 19:56:35.016574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.495 [2024-07-24 19:56:35.016621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.495 [2024-07-24 19:56:35.016636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.495 [2024-07-24 19:56:35.021298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.495 [2024-07-24 19:56:35.021361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.495 [2024-07-24 19:56:35.021377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.495 [2024-07-24 19:56:35.025992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.495 [2024-07-24 19:56:35.026055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.495 [2024-07-24 19:56:35.026070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.495 [2024-07-24 19:56:35.030735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.496 [2024-07-24 19:56:35.030810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.030842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.496 [2024-07-24 19:56:35.035416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.496 [2024-07-24 19:56:35.035478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.035511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.496 [2024-07-24 19:56:35.040122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.496 [2024-07-24 19:56:35.040186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.040218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.496 [2024-07-24 19:56:35.044832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.496 [2024-07-24 19:56:35.044895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.044928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.496 [2024-07-24 19:56:35.049527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.496 [2024-07-24 19:56:35.049589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.049620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.496 [2024-07-24 19:56:35.054296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.496 [2024-07-24 19:56:35.054360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.054375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.496 [2024-07-24 19:56:35.059161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.496 [2024-07-24 19:56:35.059222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.059255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.496 [2024-07-24 19:56:35.063839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.496 [2024-07-24 19:56:35.063901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.063933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.496 [2024-07-24 19:56:35.068473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 
00:18:06.496 [2024-07-24 19:56:35.068522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.068537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.496 [2024-07-24 19:56:35.073236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.496 [2024-07-24 19:56:35.073297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.073330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.496 [2024-07-24 19:56:35.077821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.496 [2024-07-24 19:56:35.077883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.077898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.496 [2024-07-24 19:56:35.082443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.496 [2024-07-24 19:56:35.082506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.082521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.496 [2024-07-24 19:56:35.087208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.496 [2024-07-24 19:56:35.087271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.087303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.496 [2024-07-24 19:56:35.091949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.496 [2024-07-24 19:56:35.092010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.092041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.496 [2024-07-24 19:56:35.096738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.496 [2024-07-24 19:56:35.096814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.096847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.496 [2024-07-24 19:56:35.101459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.496 [2024-07-24 19:56:35.101523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.101540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.496 [2024-07-24 19:56:35.106220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.496 [2024-07-24 19:56:35.106284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.106318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.496 [2024-07-24 19:56:35.110970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.496 [2024-07-24 19:56:35.111034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.111066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.496 [2024-07-24 19:56:35.115596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.496 [2024-07-24 19:56:35.115661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.115694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.496 [2024-07-24 19:56:35.120289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.496 [2024-07-24 19:56:35.120365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.120381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.496 [2024-07-24 19:56:35.125057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.496 [2024-07-24 19:56:35.125121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.125137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.496 [2024-07-24 19:56:35.129909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.496 [2024-07-24 19:56:35.129972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.129988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.496 [2024-07-24 19:56:35.134926] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.496 [2024-07-24 19:56:35.135008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.135024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.496 [2024-07-24 19:56:35.139812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.496 [2024-07-24 19:56:35.139877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.139894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.496 [2024-07-24 19:56:35.144747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.496 [2024-07-24 19:56:35.144821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.144842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.496 [2024-07-24 19:56:35.149588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.496 [2024-07-24 19:56:35.149636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.149652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.496 [2024-07-24 19:56:35.154618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.496 [2024-07-24 19:56:35.154668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.496 [2024-07-24 19:56:35.154684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.497 [2024-07-24 19:56:35.159616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.497 [2024-07-24 19:56:35.159667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.497 [2024-07-24 19:56:35.159682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.164690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.164750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.164767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:18:06.756 [2024-07-24 19:56:35.169633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.169687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.169704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.174533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.174582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.174597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.179358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.179409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.179430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.184077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.184127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.184143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.188752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.188800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.188816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.193498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.193549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.193565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.198281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.198333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.198348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.202957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.203014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.203031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.207773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.207854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.207870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.212721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.212784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.212800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.217424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.217474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.217490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.222241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.222307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.222322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.227130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.227178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.227193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.232138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.232204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.232219] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.236999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.237070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.237086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.241806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.241869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.241885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.246565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.246615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.246630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.251182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.251246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.251279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.255897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.255961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.255995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.260511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.260561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.260577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.265193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.265253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:06.756 [2024-07-24 19:56:35.265285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.269881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.269945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.269961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.274406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.274473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.274488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.279012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.279077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.279109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.283872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.283938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.283954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.288587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.288642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.288657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.293227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.293294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.293327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.297938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.298004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.298021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.302547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.302612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.302628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.307363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.307427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.307444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.312161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.312205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.312221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.316938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.316987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.317004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.321812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.321867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.321884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.326689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.326791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.326808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.331627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.331688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.331721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.336671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.336728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.336772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.341667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.341773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.341790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.346546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.346607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.346623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.351421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.351481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.351514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.356150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.356213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.356244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.360817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.360877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.360893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.365637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 
00:18:06.756 [2024-07-24 19:56:35.365698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.365730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.370288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.370350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.370365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.374958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.375019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.375051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.379544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.379606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.379638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.384353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.384406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.756 [2024-07-24 19:56:35.384421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.756 [2024-07-24 19:56:35.389148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.756 [2024-07-24 19:56:35.389195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.757 [2024-07-24 19:56:35.389210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.757 [2024-07-24 19:56:35.393839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.757 [2024-07-24 19:56:35.393882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.757 [2024-07-24 19:56:35.393898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.757 [2024-07-24 19:56:35.398521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.757 [2024-07-24 19:56:35.398570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.757 [2024-07-24 19:56:35.398585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.757 [2024-07-24 19:56:35.403338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.757 [2024-07-24 19:56:35.403387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.757 [2024-07-24 19:56:35.403403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.757 [2024-07-24 19:56:35.408132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.757 [2024-07-24 19:56:35.408196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.757 [2024-07-24 19:56:35.408228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.757 [2024-07-24 19:56:35.412821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.757 [2024-07-24 19:56:35.412883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.757 [2024-07-24 19:56:35.412899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.757 [2024-07-24 19:56:35.417695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.757 [2024-07-24 19:56:35.417789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.757 [2024-07-24 19:56:35.417806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.757 [2024-07-24 19:56:35.422445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:06.757 [2024-07-24 19:56:35.422509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.757 [2024-07-24 19:56:35.422524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.016 [2024-07-24 19:56:35.427067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.016 [2024-07-24 19:56:35.427130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.016 [2024-07-24 19:56:35.427162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.016 [2024-07-24 19:56:35.431688] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.016 [2024-07-24 19:56:35.431777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.016 [2024-07-24 19:56:35.431794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.016 [2024-07-24 19:56:35.436260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.016 [2024-07-24 19:56:35.436332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.016 [2024-07-24 19:56:35.436349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.016 [2024-07-24 19:56:35.440849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.016 [2024-07-24 19:56:35.440895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.016 [2024-07-24 19:56:35.440911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.016 [2024-07-24 19:56:35.445529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.016 [2024-07-24 19:56:35.445592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.016 [2024-07-24 19:56:35.445608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.016 [2024-07-24 19:56:35.450201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.016 [2024-07-24 19:56:35.450263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.016 [2024-07-24 19:56:35.450296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.016 [2024-07-24 19:56:35.454935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.016 [2024-07-24 19:56:35.454998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.016 [2024-07-24 19:56:35.455014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.016 [2024-07-24 19:56:35.459593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.016 [2024-07-24 19:56:35.459659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.016 [2024-07-24 19:56:35.459674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:18:07.016 [2024-07-24 19:56:35.464211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.016 [2024-07-24 19:56:35.464276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.016 [2024-07-24 19:56:35.464292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.016 [2024-07-24 19:56:35.469003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.016 [2024-07-24 19:56:35.469050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.016 [2024-07-24 19:56:35.469065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.016 [2024-07-24 19:56:35.473696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.016 [2024-07-24 19:56:35.473771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.017 [2024-07-24 19:56:35.473804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.017 [2024-07-24 19:56:35.478449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.017 [2024-07-24 19:56:35.478495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.017 [2024-07-24 19:56:35.478527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.017 [2024-07-24 19:56:35.483190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.017 [2024-07-24 19:56:35.483236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.017 [2024-07-24 19:56:35.483267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.017 [2024-07-24 19:56:35.487882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.017 [2024-07-24 19:56:35.487927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.017 [2024-07-24 19:56:35.487959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.017 [2024-07-24 19:56:35.492477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.017 [2024-07-24 19:56:35.492525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.017 [2024-07-24 19:56:35.492540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.017 [2024-07-24 19:56:35.497235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.017 [2024-07-24 19:56:35.497282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.017 [2024-07-24 19:56:35.497331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.017 [2024-07-24 19:56:35.502154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.017 [2024-07-24 19:56:35.502203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.017 [2024-07-24 19:56:35.502218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.017 [2024-07-24 19:56:35.506853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.017 [2024-07-24 19:56:35.506900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.017 [2024-07-24 19:56:35.506916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.017 [2024-07-24 19:56:35.511565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.017 [2024-07-24 19:56:35.511621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.017 [2024-07-24 19:56:35.511636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.017 [2024-07-24 19:56:35.516356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.017 [2024-07-24 19:56:35.516414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.017 [2024-07-24 19:56:35.516430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.017 [2024-07-24 19:56:35.521213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.017 [2024-07-24 19:56:35.521258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.017 [2024-07-24 19:56:35.521273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.017 [2024-07-24 19:56:35.526132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.017 [2024-07-24 19:56:35.526179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.017 [2024-07-24 19:56:35.526194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.017 [2024-07-24 19:56:35.530902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.017 [2024-07-24 19:56:35.530947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.017 [2024-07-24 19:56:35.530962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.017 [2024-07-24 19:56:35.535528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.017 [2024-07-24 19:56:35.535576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.017 [2024-07-24 19:56:35.535591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.017 [2024-07-24 19:56:35.540201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.017 [2024-07-24 19:56:35.540248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.017 [2024-07-24 19:56:35.540264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.017 [2024-07-24 19:56:35.545017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.017 [2024-07-24 19:56:35.545063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.017 [2024-07-24 19:56:35.545078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.017 [2024-07-24 19:56:35.549935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.017 [2024-07-24 19:56:35.549981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.017 [2024-07-24 19:56:35.549997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.017 [2024-07-24 19:56:35.554591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.017 [2024-07-24 19:56:35.554638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.017 [2024-07-24 19:56:35.554653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.017 [2024-07-24 19:56:35.559482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.017 [2024-07-24 19:56:35.559530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:07.017 [2024-07-24 19:56:35.559546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.017 [2024-07-24 19:56:35.564324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.017 [2024-07-24 19:56:35.564372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.017 [2024-07-24 19:56:35.564387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.017 [2024-07-24 19:56:35.569270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.017 [2024-07-24 19:56:35.569317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.017 [2024-07-24 19:56:35.569333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.017 [2024-07-24 19:56:35.574188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.017 [2024-07-24 19:56:35.574235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.017 [2024-07-24 19:56:35.574250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.017 [2024-07-24 19:56:35.578939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.017 [2024-07-24 19:56:35.578985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.017 [2024-07-24 19:56:35.579000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.017 [2024-07-24 19:56:35.583598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.017 [2024-07-24 19:56:35.583644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.017 [2024-07-24 19:56:35.583674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.017 [2024-07-24 19:56:35.588502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.017 [2024-07-24 19:56:35.588549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.017 [2024-07-24 19:56:35.588564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.017 [2024-07-24 19:56:35.593572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.017 [2024-07-24 19:56:35.593618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.017 [2024-07-24 19:56:35.593633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.017 [2024-07-24 19:56:35.598361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.017 [2024-07-24 19:56:35.598406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.017 [2024-07-24 19:56:35.598421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.018 [2024-07-24 19:56:35.602983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.018 [2024-07-24 19:56:35.603029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.018 [2024-07-24 19:56:35.603043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.018 [2024-07-24 19:56:35.607623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.018 [2024-07-24 19:56:35.607667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.018 [2024-07-24 19:56:35.607681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.018 [2024-07-24 19:56:35.612237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.018 [2024-07-24 19:56:35.612282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.018 [2024-07-24 19:56:35.612297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.018 [2024-07-24 19:56:35.616911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.018 [2024-07-24 19:56:35.616956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.018 [2024-07-24 19:56:35.616971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.018 [2024-07-24 19:56:35.621512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.018 [2024-07-24 19:56:35.621556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.018 [2024-07-24 19:56:35.621571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.018 [2024-07-24 19:56:35.626133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.018 [2024-07-24 19:56:35.626178] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.018 [2024-07-24 19:56:35.626192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.018 [2024-07-24 19:56:35.630553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.018 [2024-07-24 19:56:35.630601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.018 [2024-07-24 19:56:35.630616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.018 [2024-07-24 19:56:35.635171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.018 [2024-07-24 19:56:35.635216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.018 [2024-07-24 19:56:35.635230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.018 [2024-07-24 19:56:35.639716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.018 [2024-07-24 19:56:35.639770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.018 [2024-07-24 19:56:35.639786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.018 [2024-07-24 19:56:35.644366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.018 [2024-07-24 19:56:35.644412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.018 [2024-07-24 19:56:35.644427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.018 [2024-07-24 19:56:35.648915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.018 [2024-07-24 19:56:35.648960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.018 [2024-07-24 19:56:35.648974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.018 [2024-07-24 19:56:35.653491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.018 [2024-07-24 19:56:35.653536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.018 [2024-07-24 19:56:35.653550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.018 [2024-07-24 19:56:35.658260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 
00:18:07.018 [2024-07-24 19:56:35.658324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.018 [2024-07-24 19:56:35.658339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.018 [2024-07-24 19:56:35.662916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.018 [2024-07-24 19:56:35.662961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.018 [2024-07-24 19:56:35.662975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.018 [2024-07-24 19:56:35.667679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.018 [2024-07-24 19:56:35.667724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.018 [2024-07-24 19:56:35.667754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.018 [2024-07-24 19:56:35.672341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.018 [2024-07-24 19:56:35.672387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.018 [2024-07-24 19:56:35.672403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.018 [2024-07-24 19:56:35.677113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.018 [2024-07-24 19:56:35.677179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.018 [2024-07-24 19:56:35.677194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.018 [2024-07-24 19:56:35.681939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.018 [2024-07-24 19:56:35.681984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.018 [2024-07-24 19:56:35.681999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.277 [2024-07-24 19:56:35.686494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.277 [2024-07-24 19:56:35.686541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.277 [2024-07-24 19:56:35.686556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.277 [2024-07-24 19:56:35.691166] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.277 [2024-07-24 19:56:35.691211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.277 [2024-07-24 19:56:35.691225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.277 [2024-07-24 19:56:35.695804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.277 [2024-07-24 19:56:35.695851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.277 [2024-07-24 19:56:35.695866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.277 [2024-07-24 19:56:35.700455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.277 [2024-07-24 19:56:35.700502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.277 [2024-07-24 19:56:35.700517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.277 [2024-07-24 19:56:35.705154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.277 [2024-07-24 19:56:35.705201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.277 [2024-07-24 19:56:35.705217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.277 [2024-07-24 19:56:35.709860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.278 [2024-07-24 19:56:35.709906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.278 [2024-07-24 19:56:35.709921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:07.278 [2024-07-24 19:56:35.714671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.278 [2024-07-24 19:56:35.714717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.278 [2024-07-24 19:56:35.714733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.278 [2024-07-24 19:56:35.719467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.278 [2024-07-24 19:56:35.719512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.278 [2024-07-24 19:56:35.719526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:18:07.278 [2024-07-24 19:56:35.724187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x66c200) 00:18:07.278 [2024-07-24 19:56:35.724251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:07.278 [2024-07-24 19:56:35.724274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:07.278 00:18:07.278 Latency(us) 00:18:07.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.278 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:07.278 nvme0n1 : 2.00 6526.09 815.76 0.00 0.00 2447.70 2070.34 5391.83 00:18:07.278 =================================================================================================================== 00:18:07.278 Total : 6526.09 815.76 0.00 0.00 2447.70 2070.34 5391.83 00:18:07.278 0 00:18:07.278 19:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:07.278 19:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:07.278 19:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:07.278 19:56:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:07.278 | .driver_specific 00:18:07.278 | .nvme_error 00:18:07.278 | .status_code 00:18:07.278 | .command_transient_transport_error' 00:18:07.537 19:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 421 > 0 )) 00:18:07.537 19:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79728 00:18:07.537 19:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79728 ']' 00:18:07.537 19:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79728 00:18:07.537 19:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:07.537 19:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:07.537 19:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79728 00:18:07.537 killing process with pid 79728 00:18:07.537 Received shutdown signal, test time was about 2.000000 seconds 00:18:07.537 00:18:07.537 Latency(us) 00:18:07.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.537 =================================================================================================================== 00:18:07.537 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:07.537 19:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:07.537 19:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:07.537 19:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79728' 00:18:07.537 19:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79728 
00:18:07.537 19:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79728 00:18:07.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:07.796 19:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:18:07.796 19:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:07.796 19:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:07.796 19:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:07.796 19:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:07.796 19:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:18:07.796 19:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79795 00:18:07.796 19:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79795 /var/tmp/bperf.sock 00:18:07.796 19:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79795 ']' 00:18:07.796 19:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:07.796 19:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:07.796 19:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:07.796 19:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:07.796 19:56:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:07.796 [2024-07-24 19:56:36.399481] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:18:07.796 [2024-07-24 19:56:36.399890] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79795 ] 00:18:08.054 [2024-07-24 19:56:36.532747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.054 [2024-07-24 19:56:36.648255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.054 [2024-07-24 19:56:36.700972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:08.989 19:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:08.990 19:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:08.990 19:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:08.990 19:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:08.990 19:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:08.990 19:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.990 19:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:08.990 19:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.990 19:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:08.990 19:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:09.247 nvme0n1 00:18:09.506 19:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:09.506 19:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.506 19:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:09.506 19:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.506 19:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:09.506 19:56:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:09.506 Running I/O for 2 seconds... 
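(Annotation: the xtrace above, host/digest.sh@61 through @69, is the whole per-case setup for this digest-error run: bdevperf was started with -z on /var/tmp/bperf.sock, NVMe error statistics are enabled on the initiator, any previous crc32c injection is cleared and a new corruption is armed through accel_error_inject_error, the controller is attached with data digest (--ddgst) enabled, and perform_tests starts the 2-second workload whose output follows. The lines below are only a minimal stand-alone sketch of that traced sequence, not the test script itself; SPDK_DIR and the target-side RPC socket are assumptions (accel_error_inject_error is issued via rpc_cmd, whose socket is not shown in this excerpt), while every RPC name, flag, path and address is taken from the trace.)

  #!/usr/bin/env bash
  # Sketch of the traced RPC sequence (host/digest.sh@61-@69); order follows the log above.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk          # assumed checkout location, as in the trace
  BPERF_SOCK=/var/tmp/bperf.sock                 # bdevperf was launched with: bdevperf -m 2 -r $BPERF_SOCK -w randwrite -o 4096 -t 2 -q 128 -z

  # 1) Initiator side: keep per-controller NVMe error counters and retry failed I/O indefinitely.
  "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # 2) rpc_cmd side (socket path not shown in this excerpt): clear any earlier crc32c injection.
  "$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

  # 3) Attach the NVMe-oF TCP controller with data digest enabled, so corrupted CRCs surface as
  #    data digest / COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions like those logged below.
  "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # 4) Corrupt the next 256 crc32c operations handled by the accel framework.
  "$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256

  # 5) Run the 2-second bdevperf workload configured on the command line above.
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests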
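(Annotation: after each run, host/digest.sh@71/@27/@28, visible in the trace further up where the previous randread case evaluated (( 421 > 0 )), reads the transient-error counter back out of bdevperf and asserts it is non-zero. A sketch of that readback, using only the RPC and jq filter shown in the trace; the count of 421 is specific to that earlier run and will differ here.)

  # Sketch: count completions that failed as COMMAND TRANSIENT TRANSPORT ERROR (00/22).
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))   # the case passes only if at least one injected digest error was counted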
00:18:09.506 [2024-07-24 19:56:38.100145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190fef90 00:18:09.506 [2024-07-24 19:56:38.102845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.506 [2024-07-24 19:56:38.102893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.506 [2024-07-24 19:56:38.116896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190feb58 00:18:09.506 [2024-07-24 19:56:38.119525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.506 [2024-07-24 19:56:38.119571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:09.506 [2024-07-24 19:56:38.133443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190fe2e8 00:18:09.506 [2024-07-24 19:56:38.135991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.506 [2024-07-24 19:56:38.136033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:09.506 [2024-07-24 19:56:38.149911] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190fda78 00:18:09.506 [2024-07-24 19:56:38.152519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.506 [2024-07-24 19:56:38.152576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:09.506 [2024-07-24 19:56:38.166366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190fd208 00:18:09.506 [2024-07-24 19:56:38.168898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.506 [2024-07-24 19:56:38.168953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:09.765 [2024-07-24 19:56:38.181815] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190fc998 00:18:09.765 [2024-07-24 19:56:38.184186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.765 [2024-07-24 19:56:38.184226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:09.765 [2024-07-24 19:56:38.197696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190fc128 00:18:09.765 [2024-07-24 19:56:38.200117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.765 [2024-07-24 19:56:38.200159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 
m:0 dnr:0 00:18:09.765 [2024-07-24 19:56:38.213879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190fb8b8 00:18:09.765 [2024-07-24 19:56:38.216272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.765 [2024-07-24 19:56:38.216323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:09.765 [2024-07-24 19:56:38.230150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190fb048 00:18:09.765 [2024-07-24 19:56:38.232536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.765 [2024-07-24 19:56:38.232581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:09.765 [2024-07-24 19:56:38.246291] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190fa7d8 00:18:09.765 [2024-07-24 19:56:38.248643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.765 [2024-07-24 19:56:38.248688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:09.765 [2024-07-24 19:56:38.262473] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f9f68 00:18:09.765 [2024-07-24 19:56:38.264881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.765 [2024-07-24 19:56:38.264932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:09.765 [2024-07-24 19:56:38.278951] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f96f8 00:18:09.765 [2024-07-24 19:56:38.281291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.765 [2024-07-24 19:56:38.281340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:09.765 [2024-07-24 19:56:38.295421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f8e88 00:18:09.765 [2024-07-24 19:56:38.297822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.765 [2024-07-24 19:56:38.297890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:09.765 [2024-07-24 19:56:38.311869] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f8618 00:18:09.765 [2024-07-24 19:56:38.314226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.765 [2024-07-24 19:56:38.314294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 
cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:09.765 [2024-07-24 19:56:38.328495] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f7da8 00:18:09.765 [2024-07-24 19:56:38.330849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.765 [2024-07-24 19:56:38.330913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:09.765 [2024-07-24 19:56:38.344736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f7538 00:18:09.765 [2024-07-24 19:56:38.346977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.766 [2024-07-24 19:56:38.347022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:09.766 [2024-07-24 19:56:38.360930] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f6cc8 00:18:09.766 [2024-07-24 19:56:38.363209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.766 [2024-07-24 19:56:38.363275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:09.766 [2024-07-24 19:56:38.377684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f6458 00:18:09.766 [2024-07-24 19:56:38.379936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.766 [2024-07-24 19:56:38.379983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:09.766 [2024-07-24 19:56:38.394565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f5be8 00:18:09.766 [2024-07-24 19:56:38.396848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.766 [2024-07-24 19:56:38.396901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:09.766 [2024-07-24 19:56:38.411018] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f5378 00:18:09.766 [2024-07-24 19:56:38.413175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.766 [2024-07-24 19:56:38.413222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:09.766 [2024-07-24 19:56:38.427149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f4b08 00:18:09.766 [2024-07-24 19:56:38.429417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.766 [2024-07-24 19:56:38.429468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:10.025 [2024-07-24 19:56:38.443728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f4298 00:18:10.025 [2024-07-24 19:56:38.445962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.025 [2024-07-24 19:56:38.446018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:10.025 [2024-07-24 19:56:38.460630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f3a28 00:18:10.025 [2024-07-24 19:56:38.462836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.025 [2024-07-24 19:56:38.462890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:10.025 [2024-07-24 19:56:38.477786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f31b8 00:18:10.025 [2024-07-24 19:56:38.479938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.025 [2024-07-24 19:56:38.479984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:10.025 [2024-07-24 19:56:38.493974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f2948 00:18:10.025 [2024-07-24 19:56:38.496083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.025 [2024-07-24 19:56:38.496125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:10.025 [2024-07-24 19:56:38.510334] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f20d8 00:18:10.025 [2024-07-24 19:56:38.512495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.025 [2024-07-24 19:56:38.512546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:10.025 [2024-07-24 19:56:38.526820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f1868 00:18:10.025 [2024-07-24 19:56:38.528921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.025 [2024-07-24 19:56:38.528991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:10.025 [2024-07-24 19:56:38.543228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f0ff8 00:18:10.025 [2024-07-24 19:56:38.545343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.025 [2024-07-24 19:56:38.545407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:10.025 [2024-07-24 19:56:38.559010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f0788 00:18:10.025 [2024-07-24 19:56:38.561037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:6178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.025 [2024-07-24 19:56:38.561100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:10.025 [2024-07-24 19:56:38.574333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190eff18 00:18:10.025 [2024-07-24 19:56:38.576369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.025 [2024-07-24 19:56:38.576412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:10.025 [2024-07-24 19:56:38.591315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190ef6a8 00:18:10.025 [2024-07-24 19:56:38.593361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.025 [2024-07-24 19:56:38.593414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:10.025 [2024-07-24 19:56:38.608558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190eee38 00:18:10.025 [2024-07-24 19:56:38.610676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.025 [2024-07-24 19:56:38.610767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:10.025 [2024-07-24 19:56:38.625443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190ee5c8 00:18:10.025 [2024-07-24 19:56:38.627424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.025 [2024-07-24 19:56:38.627469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:10.025 [2024-07-24 19:56:38.641062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190edd58 00:18:10.025 [2024-07-24 19:56:38.642897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.025 [2024-07-24 19:56:38.642951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:10.025 [2024-07-24 19:56:38.656702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190ed4e8 00:18:10.025 [2024-07-24 19:56:38.658583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.025 [2024-07-24 19:56:38.658643] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:10.025 [2024-07-24 19:56:38.672443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190ecc78 00:18:10.025 [2024-07-24 19:56:38.674346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.025 [2024-07-24 19:56:38.674407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:10.025 [2024-07-24 19:56:38.688462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190ec408 00:18:10.025 [2024-07-24 19:56:38.690372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.025 [2024-07-24 19:56:38.690432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:10.284 [2024-07-24 19:56:38.704215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190ebb98 00:18:10.284 [2024-07-24 19:56:38.706186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.285 [2024-07-24 19:56:38.706248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:10.285 [2024-07-24 19:56:38.719943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190eb328 00:18:10.285 [2024-07-24 19:56:38.721750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.285 [2024-07-24 19:56:38.721819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:10.285 [2024-07-24 19:56:38.736102] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190eaab8 00:18:10.285 [2024-07-24 19:56:38.737980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.285 [2024-07-24 19:56:38.738032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:10.285 [2024-07-24 19:56:38.752814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190ea248 00:18:10.285 [2024-07-24 19:56:38.754692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.285 [2024-07-24 19:56:38.754783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:10.285 [2024-07-24 19:56:38.769548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e99d8 00:18:10.285 [2024-07-24 19:56:38.771413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.285 [2024-07-24 19:56:38.771476] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:10.285 [2024-07-24 19:56:38.785825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e9168 00:18:10.285 [2024-07-24 19:56:38.787620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.285 [2024-07-24 19:56:38.787685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:10.285 [2024-07-24 19:56:38.802121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e88f8 00:18:10.285 [2024-07-24 19:56:38.803958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.285 [2024-07-24 19:56:38.804009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:10.285 [2024-07-24 19:56:38.818451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e8088 00:18:10.285 [2024-07-24 19:56:38.820327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.285 [2024-07-24 19:56:38.820381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:10.285 [2024-07-24 19:56:38.834636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e7818 00:18:10.285 [2024-07-24 19:56:38.836417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.285 [2024-07-24 19:56:38.836467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:10.285 [2024-07-24 19:56:38.851152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e6fa8 00:18:10.285 [2024-07-24 19:56:38.853483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.285 [2024-07-24 19:56:38.853543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:10.285 [2024-07-24 19:56:38.867654] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e6738 00:18:10.285 [2024-07-24 19:56:38.869340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.285 [2024-07-24 19:56:38.869388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:10.285 [2024-07-24 19:56:38.884021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e5ec8 00:18:10.285 [2024-07-24 19:56:38.885686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.285 [2024-07-24 
19:56:38.885766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:10.285 [2024-07-24 19:56:38.899799] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e5658 00:18:10.285 [2024-07-24 19:56:38.901491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.285 [2024-07-24 19:56:38.901586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:10.285 [2024-07-24 19:56:38.915608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e4de8 00:18:10.285 [2024-07-24 19:56:38.917277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.285 [2024-07-24 19:56:38.917342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:10.285 [2024-07-24 19:56:38.931432] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e4578 00:18:10.285 [2024-07-24 19:56:38.933087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.285 [2024-07-24 19:56:38.933152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:10.285 [2024-07-24 19:56:38.947525] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e3d08 00:18:10.285 [2024-07-24 19:56:38.949175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.285 [2024-07-24 19:56:38.949239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:10.544 [2024-07-24 19:56:38.963220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e3498 00:18:10.544 [2024-07-24 19:56:38.964847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.544 [2024-07-24 19:56:38.964892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:10.544 [2024-07-24 19:56:38.978856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e2c28 00:18:10.544 [2024-07-24 19:56:38.980375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.544 [2024-07-24 19:56:38.980422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:10.544 [2024-07-24 19:56:38.994219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e23b8 00:18:10.544 [2024-07-24 19:56:38.995660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20780 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:10.544 [2024-07-24 19:56:38.995713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:10.544 [2024-07-24 19:56:39.009703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e1b48 00:18:10.544 [2024-07-24 19:56:39.011138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.544 [2024-07-24 19:56:39.011191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:10.544 [2024-07-24 19:56:39.025652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e12d8 00:18:10.544 [2024-07-24 19:56:39.027122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.544 [2024-07-24 19:56:39.027175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:10.544 [2024-07-24 19:56:39.041502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e0a68 00:18:10.544 [2024-07-24 19:56:39.042926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.544 [2024-07-24 19:56:39.042965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:10.544 [2024-07-24 19:56:39.056792] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e01f8 00:18:10.544 [2024-07-24 19:56:39.058225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.544 [2024-07-24 19:56:39.058296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:10.544 [2024-07-24 19:56:39.072080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190df988 00:18:10.544 [2024-07-24 19:56:39.073444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.544 [2024-07-24 19:56:39.073500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:10.544 [2024-07-24 19:56:39.087766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190df118 00:18:10.544 [2024-07-24 19:56:39.089115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.544 [2024-07-24 19:56:39.089173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:10.544 [2024-07-24 19:56:39.103542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190de8a8 00:18:10.544 [2024-07-24 19:56:39.104927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19623 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.544 [2024-07-24 19:56:39.104968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:10.544 [2024-07-24 19:56:39.119450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190de038 00:18:10.544 [2024-07-24 19:56:39.120789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.544 [2024-07-24 19:56:39.120840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:10.544 [2024-07-24 19:56:39.141594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190de038 00:18:10.544 [2024-07-24 19:56:39.144091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.545 [2024-07-24 19:56:39.144145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:10.545 [2024-07-24 19:56:39.157121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190de8a8 00:18:10.545 [2024-07-24 19:56:39.159591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.545 [2024-07-24 19:56:39.159644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:10.545 [2024-07-24 19:56:39.172847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190df118 00:18:10.545 [2024-07-24 19:56:39.175264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.545 [2024-07-24 19:56:39.175317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:10.545 [2024-07-24 19:56:39.188414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190df988 00:18:10.545 [2024-07-24 19:56:39.190855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.545 [2024-07-24 19:56:39.190909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:10.545 [2024-07-24 19:56:39.204106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e01f8 00:18:10.545 [2024-07-24 19:56:39.206631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.545 [2024-07-24 19:56:39.206669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:10.803 [2024-07-24 19:56:39.220212] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e0a68 00:18:10.803 [2024-07-24 19:56:39.222690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:104 nsid:1 lba:10277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.803 [2024-07-24 19:56:39.222769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:10.803 [2024-07-24 19:56:39.236521] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e12d8 00:18:10.803 [2024-07-24 19:56:39.239018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.803 [2024-07-24 19:56:39.239072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:10.803 [2024-07-24 19:56:39.252533] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e1b48 00:18:10.803 [2024-07-24 19:56:39.254965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.803 [2024-07-24 19:56:39.255018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:10.803 [2024-07-24 19:56:39.268139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e23b8 00:18:10.803 [2024-07-24 19:56:39.270524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.803 [2024-07-24 19:56:39.270577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:10.803 [2024-07-24 19:56:39.283880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e2c28 00:18:10.803 [2024-07-24 19:56:39.286238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.803 [2024-07-24 19:56:39.286279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:10.803 [2024-07-24 19:56:39.300153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e3498 00:18:10.803 [2024-07-24 19:56:39.302479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.803 [2024-07-24 19:56:39.302519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:10.803 [2024-07-24 19:56:39.316388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e3d08 00:18:10.803 [2024-07-24 19:56:39.318674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.803 [2024-07-24 19:56:39.318714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:10.803 [2024-07-24 19:56:39.332361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e4578 00:18:10.804 [2024-07-24 19:56:39.334623] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.804 [2024-07-24 19:56:39.334664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:10.804 [2024-07-24 19:56:39.348252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e4de8 00:18:10.804 [2024-07-24 19:56:39.350558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.804 [2024-07-24 19:56:39.350613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:10.804 [2024-07-24 19:56:39.364143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e5658 00:18:10.804 [2024-07-24 19:56:39.366412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.804 [2024-07-24 19:56:39.366466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:10.804 [2024-07-24 19:56:39.380039] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e5ec8 00:18:10.804 [2024-07-24 19:56:39.382222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.804 [2024-07-24 19:56:39.382279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:10.804 [2024-07-24 19:56:39.396024] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e6738 00:18:10.804 [2024-07-24 19:56:39.398211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.804 [2024-07-24 19:56:39.398249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:10.804 [2024-07-24 19:56:39.411949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e6fa8 00:18:10.804 [2024-07-24 19:56:39.414108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.804 [2024-07-24 19:56:39.414150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:10.804 [2024-07-24 19:56:39.428095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e7818 00:18:10.804 [2024-07-24 19:56:39.430414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.804 [2024-07-24 19:56:39.430456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:10.804 [2024-07-24 19:56:39.444926] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e8088 00:18:10.804 [2024-07-24 
19:56:39.447063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.804 [2024-07-24 19:56:39.447104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:10.804 [2024-07-24 19:56:39.461292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e88f8 00:18:10.804 [2024-07-24 19:56:39.463500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:10.804 [2024-07-24 19:56:39.463540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:11.062 [2024-07-24 19:56:39.477311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e9168 00:18:11.062 [2024-07-24 19:56:39.479407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.062 [2024-07-24 19:56:39.479445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:11.062 [2024-07-24 19:56:39.493463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190e99d8 00:18:11.062 [2024-07-24 19:56:39.495575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.062 [2024-07-24 19:56:39.495615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:11.062 [2024-07-24 19:56:39.509768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190ea248 00:18:11.062 [2024-07-24 19:56:39.511856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.062 [2024-07-24 19:56:39.511910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:11.062 [2024-07-24 19:56:39.525703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190eaab8 00:18:11.062 [2024-07-24 19:56:39.527759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.062 [2024-07-24 19:56:39.527797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:11.062 [2024-07-24 19:56:39.541638] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190eb328 00:18:11.062 [2024-07-24 19:56:39.543647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.062 [2024-07-24 19:56:39.543686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:11.062 [2024-07-24 19:56:39.557718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190ebb98 
00:18:11.062 [2024-07-24 19:56:39.559727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.062 [2024-07-24 19:56:39.559777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:11.062 [2024-07-24 19:56:39.573640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190ec408 00:18:11.062 [2024-07-24 19:56:39.575617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.062 [2024-07-24 19:56:39.575656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:11.063 [2024-07-24 19:56:39.589610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190ecc78 00:18:11.063 [2024-07-24 19:56:39.591566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:14089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.063 [2024-07-24 19:56:39.591604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:11.063 [2024-07-24 19:56:39.605530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190ed4e8 00:18:11.063 [2024-07-24 19:56:39.607469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.063 [2024-07-24 19:56:39.607507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:11.063 [2024-07-24 19:56:39.621636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190edd58 00:18:11.063 [2024-07-24 19:56:39.623619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.063 [2024-07-24 19:56:39.623656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:11.063 [2024-07-24 19:56:39.637933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190ee5c8 00:18:11.063 [2024-07-24 19:56:39.639866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.063 [2024-07-24 19:56:39.639906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:11.063 [2024-07-24 19:56:39.653927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190eee38 00:18:11.063 [2024-07-24 19:56:39.655795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.063 [2024-07-24 19:56:39.655834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:11.063 [2024-07-24 19:56:39.669897] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with 
pdu=0x2000190ef6a8 00:18:11.063 [2024-07-24 19:56:39.671741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.063 [2024-07-24 19:56:39.671805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:11.063 [2024-07-24 19:56:39.685729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190eff18 00:18:11.063 [2024-07-24 19:56:39.687593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.063 [2024-07-24 19:56:39.687647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:11.063 [2024-07-24 19:56:39.701368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f0788 00:18:11.063 [2024-07-24 19:56:39.703199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.063 [2024-07-24 19:56:39.703253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:11.063 [2024-07-24 19:56:39.717147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f0ff8 00:18:11.063 [2024-07-24 19:56:39.718986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.063 [2024-07-24 19:56:39.719040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:11.321 [2024-07-24 19:56:39.733385] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f1868 00:18:11.321 [2024-07-24 19:56:39.735241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.321 [2024-07-24 19:56:39.735293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:11.321 [2024-07-24 19:56:39.749535] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f20d8 00:18:11.321 [2024-07-24 19:56:39.751342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.321 [2024-07-24 19:56:39.751380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:11.321 [2024-07-24 19:56:39.765265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f2948 00:18:11.321 [2024-07-24 19:56:39.767030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.321 [2024-07-24 19:56:39.767067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:11.321 [2024-07-24 19:56:39.781034] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1831650) with pdu=0x2000190f31b8 00:18:11.321 [2024-07-24 19:56:39.782771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.321 [2024-07-24 19:56:39.782833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:11.321 [2024-07-24 19:56:39.796878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f3a28 00:18:11.321 [2024-07-24 19:56:39.798580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.321 [2024-07-24 19:56:39.798618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:11.321 [2024-07-24 19:56:39.812727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f4298 00:18:11.321 [2024-07-24 19:56:39.814417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.321 [2024-07-24 19:56:39.814456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:11.321 [2024-07-24 19:56:39.828539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f4b08 00:18:11.321 [2024-07-24 19:56:39.830254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.321 [2024-07-24 19:56:39.830291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:11.321 [2024-07-24 19:56:39.844242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f5378 00:18:11.321 [2024-07-24 19:56:39.845951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.321 [2024-07-24 19:56:39.845989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:11.322 [2024-07-24 19:56:39.859974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f5be8 00:18:11.322 [2024-07-24 19:56:39.861607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.322 [2024-07-24 19:56:39.861646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:11.322 [2024-07-24 19:56:39.875687] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f6458 00:18:11.322 [2024-07-24 19:56:39.877326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.322 [2024-07-24 19:56:39.877369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:11.322 [2024-07-24 19:56:39.891572] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1831650) with pdu=0x2000190f6cc8 00:18:11.322 [2024-07-24 19:56:39.893216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.322 [2024-07-24 19:56:39.893257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:11.322 [2024-07-24 19:56:39.907447] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f7538 00:18:11.322 [2024-07-24 19:56:39.909072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.322 [2024-07-24 19:56:39.909113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:11.322 [2024-07-24 19:56:39.923096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f7da8 00:18:11.322 [2024-07-24 19:56:39.924706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.322 [2024-07-24 19:56:39.924775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:11.322 [2024-07-24 19:56:39.938799] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f8618 00:18:11.322 [2024-07-24 19:56:39.940290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.322 [2024-07-24 19:56:39.940354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:11.322 [2024-07-24 19:56:39.954410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f8e88 00:18:11.322 [2024-07-24 19:56:39.955958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.322 [2024-07-24 19:56:39.955996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:11.322 [2024-07-24 19:56:39.970508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f96f8 00:18:11.322 [2024-07-24 19:56:39.972120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.322 [2024-07-24 19:56:39.972159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:11.322 [2024-07-24 19:56:39.986702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190f9f68 00:18:11.322 [2024-07-24 19:56:39.988225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.322 [2024-07-24 19:56:39.988264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:11.580 [2024-07-24 19:56:40.003200] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190fa7d8 00:18:11.580 [2024-07-24 19:56:40.004789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.580 [2024-07-24 19:56:40.004837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:11.580 [2024-07-24 19:56:40.019171] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190fb048 00:18:11.580 [2024-07-24 19:56:40.020681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.580 [2024-07-24 19:56:40.020722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:11.580 [2024-07-24 19:56:40.035207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190fb8b8 00:18:11.580 [2024-07-24 19:56:40.036644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.580 [2024-07-24 19:56:40.036711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:11.580 [2024-07-24 19:56:40.051031] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190fc128 00:18:11.580 [2024-07-24 19:56:40.052438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.580 [2024-07-24 19:56:40.052478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:11.580 [2024-07-24 19:56:40.066806] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190fc998 00:18:11.581 [2024-07-24 19:56:40.068214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.581 [2024-07-24 19:56:40.068253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:11.581 [2024-07-24 19:56:40.082719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1831650) with pdu=0x2000190fd208 00:18:11.581 [2024-07-24 19:56:40.084093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:11.581 [2024-07-24 19:56:40.084133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:11.581 00:18:11.581 Latency(us) 00:18:11.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.581 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:11.581 nvme0n1 : 2.00 15775.70 61.62 0.00 0.00 8106.19 2442.71 30146.56 00:18:11.581 =================================================================================================================== 00:18:11.581 Total : 15775.70 61.62 0.00 0.00 8106.19 2442.71 30146.56 00:18:11.581 0 00:18:11.581 
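The transient-error check that follows in the trace reduces to a single RPC-plus-jq pipeline. A minimal standalone sketch, reusing only the rpc.py invocation, bperf socket path, bdev name, and jq filter visible in the trace (the resulting count of 124 is specific to this run):

# Read per-bdev NVMe error counters from the bdevperf RPC socket and extract how many
# completions carried COMMAND TRANSIENT TRANSPORT ERROR status, i.e. the errors produced
# by the crc32c data-digest corruption logged above.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
# digest.sh then asserts the printed count is greater than zero ((( 124 > 0 )) in this run).
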
19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:11.581 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:11.581 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:11.581 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:11.581 | .driver_specific 00:18:11.581 | .nvme_error 00:18:11.581 | .status_code 00:18:11.581 | .command_transient_transport_error' 00:18:11.839 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 124 > 0 )) 00:18:11.839 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79795 00:18:11.839 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79795 ']' 00:18:11.839 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79795 00:18:11.839 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:11.839 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:11.839 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79795 00:18:11.839 killing process with pid 79795 00:18:11.839 Received shutdown signal, test time was about 2.000000 seconds 00:18:11.839 00:18:11.839 Latency(us) 00:18:11.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.839 =================================================================================================================== 00:18:11.839 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:11.839 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:11.839 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:11.839 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79795' 00:18:11.839 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79795 00:18:11.839 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79795 00:18:12.097 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:18:12.097 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:12.097 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:12.097 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:12.097 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:12.097 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79854 00:18:12.097 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79854 /var/tmp/bperf.sock 00:18:12.097 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:12.097 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 79854 ']' 00:18:12.097 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:12.097 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:12.097 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:12.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:12.097 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:12.097 19:56:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:12.097 [2024-07-24 19:56:40.686768] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:18:12.097 [2024-07-24 19:56:40.687034] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79854 ] 00:18:12.097 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:12.097 Zero copy mechanism will not be used. 00:18:12.356 [2024-07-24 19:56:40.827421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.356 [2024-07-24 19:56:40.936943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.356 [2024-07-24 19:56:40.991032] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:13.290 19:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:13.290 19:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:18:13.290 19:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:13.290 19:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:13.290 19:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:13.290 19:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.290 19:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:13.290 19:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.290 19:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:13.290 19:56:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:13.860 nvme0n1 00:18:13.860 19:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:13.860 19:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.860 19:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:13.860 19:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.860 19:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:13.860 19:56:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:13.860 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:13.860 Zero copy mechanism will not be used. 00:18:13.861 Running I/O for 2 seconds... 00:18:13.861 [2024-07-24 19:56:42.395096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:13.861 [2024-07-24 19:56:42.395419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.861 [2024-07-24 19:56:42.395452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.861 [2024-07-24 19:56:42.400205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:13.861 [2024-07-24 19:56:42.400478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.861 [2024-07-24 19:56:42.400825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.861 [2024-07-24 19:56:42.405601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:13.861 [2024-07-24 19:56:42.405700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.861 [2024-07-24 19:56:42.405725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.861 [2024-07-24 19:56:42.410632] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:13.861 [2024-07-24 19:56:42.410717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.861 [2024-07-24 19:56:42.410741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.861 [2024-07-24 19:56:42.415569] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:13.861 [2024-07-24 19:56:42.415655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.861 [2024-07-24 19:56:42.415680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.861 [2024-07-24 19:56:42.420521] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:13.861 [2024-07-24 19:56:42.420619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.861 [2024-07-24 19:56:42.420661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.861 [2024-07-24 19:56:42.425518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:13.861 [2024-07-24 19:56:42.425603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.861 [2024-07-24 19:56:42.425628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.861 [2024-07-24 19:56:42.430375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:13.861 [2024-07-24 19:56:42.430464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.861 [2024-07-24 19:56:42.430488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.861 [2024-07-24 19:56:42.435351] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:13.861 [2024-07-24 19:56:42.435457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.861 [2024-07-24 19:56:42.435481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.861 [2024-07-24 19:56:42.440465] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:13.861 [2024-07-24 19:56:42.440706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.861 [2024-07-24 19:56:42.440890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.861 [2024-07-24 19:56:42.445617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:13.861 [2024-07-24 19:56:42.445893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.861 [2024-07-24 19:56:42.446067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.861 [2024-07-24 19:56:42.450702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:13.861 [2024-07-24 19:56:42.450979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.861 [2024-07-24 19:56:42.451253] nvme_qpair.c: 
[... the same three-line sequence repeats for every WRITE on qid:1 from 19:56:42.430 through 19:56:43.150: tcp.c:2113:data_crc32_calc_done reports "Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90", nvme_qpair.c:243 prints the affected WRITE (nsid:1, len:32, varying cid and lba), and nvme_qpair.c:474 prints its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22) cdw0:0 p:0 m:0 dnr:0; only the timestamps and the cid, lba, and sqhd values differ ...]
00:18:14.647 [2024-07-24 19:56:43.155132] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90
00:18:14.647 [2024-07-24 19:56:43.155216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4
nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.647 [2024-07-24 19:56:43.155241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.647 [2024-07-24 19:56:43.160216] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.647 [2024-07-24 19:56:43.160314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.647 [2024-07-24 19:56:43.160352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.647 [2024-07-24 19:56:43.165319] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.647 [2024-07-24 19:56:43.165433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.647 [2024-07-24 19:56:43.165458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.647 [2024-07-24 19:56:43.170537] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.647 [2024-07-24 19:56:43.170635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.647 [2024-07-24 19:56:43.170659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.647 [2024-07-24 19:56:43.175508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.647 [2024-07-24 19:56:43.175592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.647 [2024-07-24 19:56:43.175616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.647 [2024-07-24 19:56:43.180619] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.647 [2024-07-24 19:56:43.180692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.647 [2024-07-24 19:56:43.180717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.647 [2024-07-24 19:56:43.185668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.647 [2024-07-24 19:56:43.185777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.647 [2024-07-24 19:56:43.185837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.647 [2024-07-24 19:56:43.190875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.647 [2024-07-24 19:56:43.190964] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.647 [2024-07-24 19:56:43.190989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.647 [2024-07-24 19:56:43.196158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.647 [2024-07-24 19:56:43.196227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.647 [2024-07-24 19:56:43.196253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.647 [2024-07-24 19:56:43.201205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.647 [2024-07-24 19:56:43.201281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.647 [2024-07-24 19:56:43.201306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.647 [2024-07-24 19:56:43.206345] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.647 [2024-07-24 19:56:43.206455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.647 [2024-07-24 19:56:43.206480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.647 [2024-07-24 19:56:43.211538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.647 [2024-07-24 19:56:43.211655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.647 [2024-07-24 19:56:43.211681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.647 [2024-07-24 19:56:43.216871] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.647 [2024-07-24 19:56:43.217017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.647 [2024-07-24 19:56:43.217041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.647 [2024-07-24 19:56:43.221557] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.647 [2024-07-24 19:56:43.221759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.647 [2024-07-24 19:56:43.221799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.647 [2024-07-24 19:56:43.226679] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.647 [2024-07-24 19:56:43.227021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.647 [2024-07-24 19:56:43.227058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.647 [2024-07-24 19:56:43.231787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.647 [2024-07-24 19:56:43.231871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.647 [2024-07-24 19:56:43.231897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.647 [2024-07-24 19:56:43.236913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.647 [2024-07-24 19:56:43.237001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.647 [2024-07-24 19:56:43.237027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.647 [2024-07-24 19:56:43.242072] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.647 [2024-07-24 19:56:43.242160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.647 [2024-07-24 19:56:43.242185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.647 [2024-07-24 19:56:43.247356] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.647 [2024-07-24 19:56:43.247429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.647 [2024-07-24 19:56:43.247455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.647 [2024-07-24 19:56:43.252360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.647 [2024-07-24 19:56:43.252434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.647 [2024-07-24 19:56:43.252459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.647 [2024-07-24 19:56:43.257386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.647 [2024-07-24 19:56:43.257485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.647 [2024-07-24 19:56:43.257509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.647 [2024-07-24 19:56:43.262480] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.647 [2024-07-24 
19:56:43.262583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.647 [2024-07-24 19:56:43.262607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.647 [2024-07-24 19:56:43.267466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.647 [2024-07-24 19:56:43.267565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.647 [2024-07-24 19:56:43.267589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.647 [2024-07-24 19:56:43.272521] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.647 [2024-07-24 19:56:43.272593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.647 [2024-07-24 19:56:43.272619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.647 [2024-07-24 19:56:43.277612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.647 [2024-07-24 19:56:43.277702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.647 [2024-07-24 19:56:43.277726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.647 [2024-07-24 19:56:43.282668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.648 [2024-07-24 19:56:43.282817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.648 [2024-07-24 19:56:43.282843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.648 [2024-07-24 19:56:43.287896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.648 [2024-07-24 19:56:43.287971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.648 [2024-07-24 19:56:43.287996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.648 [2024-07-24 19:56:43.293075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.648 [2024-07-24 19:56:43.293188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.648 [2024-07-24 19:56:43.293212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.648 [2024-07-24 19:56:43.298281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with 
pdu=0x2000190fef90 00:18:14.648 [2024-07-24 19:56:43.298366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.648 [2024-07-24 19:56:43.298391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.648 [2024-07-24 19:56:43.303545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.648 [2024-07-24 19:56:43.303633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.648 [2024-07-24 19:56:43.303669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.648 [2024-07-24 19:56:43.308840] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.648 [2024-07-24 19:56:43.308925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.648 [2024-07-24 19:56:43.308949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.648 [2024-07-24 19:56:43.314094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.648 [2024-07-24 19:56:43.314182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.648 [2024-07-24 19:56:43.314206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.907 [2024-07-24 19:56:43.319235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.907 [2024-07-24 19:56:43.319339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.907 [2024-07-24 19:56:43.319364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.907 [2024-07-24 19:56:43.324545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.907 [2024-07-24 19:56:43.324617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.907 [2024-07-24 19:56:43.324643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.907 [2024-07-24 19:56:43.329863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.907 [2024-07-24 19:56:43.329954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.907 [2024-07-24 19:56:43.329980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.907 [2024-07-24 19:56:43.335106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.907 [2024-07-24 19:56:43.335196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.907 [2024-07-24 19:56:43.335221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.907 [2024-07-24 19:56:43.340252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.907 [2024-07-24 19:56:43.340365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.907 [2024-07-24 19:56:43.340390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.907 [2024-07-24 19:56:43.345334] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.907 [2024-07-24 19:56:43.345422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.907 [2024-07-24 19:56:43.345447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.907 [2024-07-24 19:56:43.350681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.907 [2024-07-24 19:56:43.350872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.907 [2024-07-24 19:56:43.350910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.907 [2024-07-24 19:56:43.355974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.907 [2024-07-24 19:56:43.356129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.907 [2024-07-24 19:56:43.356153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.907 [2024-07-24 19:56:43.360875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.907 [2024-07-24 19:56:43.361073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.907 [2024-07-24 19:56:43.361104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.907 [2024-07-24 19:56:43.365938] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.907 [2024-07-24 19:56:43.366238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.907 [2024-07-24 19:56:43.366269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.907 [2024-07-24 19:56:43.370974] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.907 [2024-07-24 19:56:43.371082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.907 [2024-07-24 19:56:43.371107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.907 [2024-07-24 19:56:43.376161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.907 [2024-07-24 19:56:43.376241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.907 [2024-07-24 19:56:43.376265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.907 [2024-07-24 19:56:43.381626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.907 [2024-07-24 19:56:43.381735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.907 [2024-07-24 19:56:43.381761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.907 [2024-07-24 19:56:43.386829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.907 [2024-07-24 19:56:43.386918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.907 [2024-07-24 19:56:43.386943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.907 [2024-07-24 19:56:43.392129] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.907 [2024-07-24 19:56:43.392232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.907 [2024-07-24 19:56:43.392256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.907 [2024-07-24 19:56:43.397175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.907 [2024-07-24 19:56:43.397260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.907 [2024-07-24 19:56:43.397285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.907 [2024-07-24 19:56:43.402581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.907 [2024-07-24 19:56:43.402890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.907 [2024-07-24 19:56:43.403111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.907 [2024-07-24 19:56:43.407794] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.907 [2024-07-24 19:56:43.407879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.907 [2024-07-24 19:56:43.407905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.907 [2024-07-24 19:56:43.412690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.907 [2024-07-24 19:56:43.412835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.907 [2024-07-24 19:56:43.412861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.907 [2024-07-24 19:56:43.417660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.907 [2024-07-24 19:56:43.417766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.907 [2024-07-24 19:56:43.417808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.907 [2024-07-24 19:56:43.422561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.907 [2024-07-24 19:56:43.422662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.907 [2024-07-24 19:56:43.422688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.907 [2024-07-24 19:56:43.427590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.907 [2024-07-24 19:56:43.427678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.907 [2024-07-24 19:56:43.427719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.907 [2024-07-24 19:56:43.432542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.907 [2024-07-24 19:56:43.432693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.432719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.437608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.437698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.437724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.908 
[2024-07-24 19:56:43.442606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.442710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.442735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.447593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.447685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.447711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.452551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.452649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.452690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.457597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.457697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.457722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.462551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.462643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.462668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.467479] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.467586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.467611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.472363] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.472441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.472477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.477630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.477719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.477764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.483009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.483095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.483122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.488097] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.488225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.488252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.493289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.493413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.493439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.498135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.498326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.498351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.503327] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.503630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.503663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.508404] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.508487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.508516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.513703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.513845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.513872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.519032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.519121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.519163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.524290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.524436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.524463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.529489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.529575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.529600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.534578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.534666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.534691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.539708] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.539855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.539882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.544859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.544949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.544973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.549801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.549901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.549928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.554732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.554892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.554919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.560156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.560250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.560276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.565417] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.565511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.565538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.570646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.570731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.570776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.908 [2024-07-24 19:56:43.575906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:14.908 [2024-07-24 19:56:43.576007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.908 [2024-07-24 19:56:43.576033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.169 [2024-07-24 19:56:43.581010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.169 [2024-07-24 19:56:43.581111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.169 [2024-07-24 19:56:43.581138] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.169 [2024-07-24 19:56:43.586213] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.169 [2024-07-24 19:56:43.586319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.169 [2024-07-24 19:56:43.586345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.169 [2024-07-24 19:56:43.591610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.169 [2024-07-24 19:56:43.591748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.169 [2024-07-24 19:56:43.591774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.169 [2024-07-24 19:56:43.597024] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.169 [2024-07-24 19:56:43.597134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.169 [2024-07-24 19:56:43.597160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.169 [2024-07-24 19:56:43.602272] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.169 [2024-07-24 19:56:43.602361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.169 [2024-07-24 19:56:43.602386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.169 [2024-07-24 19:56:43.607278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.169 [2024-07-24 19:56:43.607394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.169 [2024-07-24 19:56:43.607420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.169 [2024-07-24 19:56:43.612358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.169 [2024-07-24 19:56:43.612450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.169 [2024-07-24 19:56:43.612478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.169 [2024-07-24 19:56:43.617412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.169 [2024-07-24 19:56:43.617526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.169 [2024-07-24 
19:56:43.617550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.169 [2024-07-24 19:56:43.622587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.169 [2024-07-24 19:56:43.622749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.169 [2024-07-24 19:56:43.622775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.169 [2024-07-24 19:56:43.627614] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.169 [2024-07-24 19:56:43.627810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.169 [2024-07-24 19:56:43.627836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.169 [2024-07-24 19:56:43.632370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.169 [2024-07-24 19:56:43.632560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.169 [2024-07-24 19:56:43.632587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.169 [2024-07-24 19:56:43.637383] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.169 [2024-07-24 19:56:43.637689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.169 [2024-07-24 19:56:43.637723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.169 [2024-07-24 19:56:43.642914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.169 [2024-07-24 19:56:43.643257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.169 [2024-07-24 19:56:43.643290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.169 [2024-07-24 19:56:43.648460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.169 [2024-07-24 19:56:43.648771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.169 [2024-07-24 19:56:43.648806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.169 [2024-07-24 19:56:43.653467] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.169 [2024-07-24 19:56:43.653560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:15.169 [2024-07-24 19:56:43.653587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.169 [2024-07-24 19:56:43.658632] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.169 [2024-07-24 19:56:43.658712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.169 [2024-07-24 19:56:43.658738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.169 [2024-07-24 19:56:43.663853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.169 [2024-07-24 19:56:43.663930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.169 [2024-07-24 19:56:43.663957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.169 [2024-07-24 19:56:43.668923] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.169 [2024-07-24 19:56:43.669011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.169 [2024-07-24 19:56:43.669037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.169 [2024-07-24 19:56:43.674245] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.169 [2024-07-24 19:56:43.674347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.169 [2024-07-24 19:56:43.674374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.169 [2024-07-24 19:56:43.679583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.169 [2024-07-24 19:56:43.679729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.169 [2024-07-24 19:56:43.679759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.169 [2024-07-24 19:56:43.685066] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.169 [2024-07-24 19:56:43.685185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.169 [2024-07-24 19:56:43.685212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.690473] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.690563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3616 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.170 [2024-07-24 19:56:43.690592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.695889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.695980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.170 [2024-07-24 19:56:43.696007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.701165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.701260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.170 [2024-07-24 19:56:43.701288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.706522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.706628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.170 [2024-07-24 19:56:43.706655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.711787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.711888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.170 [2024-07-24 19:56:43.711915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.717062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.717180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.170 [2024-07-24 19:56:43.717208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.722189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.722287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.170 [2024-07-24 19:56:43.722314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.727115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.727201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.170 [2024-07-24 19:56:43.727226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.732010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.732099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.170 [2024-07-24 19:56:43.732124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.736996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.737100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.170 [2024-07-24 19:56:43.737125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.741971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.742094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.170 [2024-07-24 19:56:43.742122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.746892] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.746991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.170 [2024-07-24 19:56:43.747016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.751915] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.752053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.170 [2024-07-24 19:56:43.752077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.756769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.756996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.170 [2024-07-24 19:56:43.757021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.761271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.761458] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.170 [2024-07-24 19:56:43.761481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.766083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.766391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.170 [2024-07-24 19:56:43.766424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.771056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.771350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.170 [2024-07-24 19:56:43.771381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.775672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.775807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.170 [2024-07-24 19:56:43.775832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.780562] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.780670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.170 [2024-07-24 19:56:43.780694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.785365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.785448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.170 [2024-07-24 19:56:43.785472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.790272] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.790371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.170 [2024-07-24 19:56:43.790395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.795095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.795197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.170 [2024-07-24 19:56:43.795222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.799980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.800060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.170 [2024-07-24 19:56:43.800084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.804924] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.805011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.170 [2024-07-24 19:56:43.805036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.809656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.809743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.170 [2024-07-24 19:56:43.809813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.814528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.814614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.170 [2024-07-24 19:56:43.814638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.170 [2024-07-24 19:56:43.819417] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.170 [2024-07-24 19:56:43.819509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.171 [2024-07-24 19:56:43.819533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.171 [2024-07-24 19:56:43.824426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.171 [2024-07-24 19:56:43.824513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.171 [2024-07-24 19:56:43.824541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.171 [2024-07-24 19:56:43.829350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.171 [2024-07-24 
19:56:43.829461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.171 [2024-07-24 19:56:43.829488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.171 [2024-07-24 19:56:43.834308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.171 [2024-07-24 19:56:43.834413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.171 [2024-07-24 19:56:43.834437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.431 [2024-07-24 19:56:43.839160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.431 [2024-07-24 19:56:43.839259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.431 [2024-07-24 19:56:43.839282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.431 [2024-07-24 19:56:43.844015] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.431 [2024-07-24 19:56:43.844098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.431 [2024-07-24 19:56:43.844121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.431 [2024-07-24 19:56:43.848821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.431 [2024-07-24 19:56:43.848907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.431 [2024-07-24 19:56:43.848931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.431 [2024-07-24 19:56:43.853712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.431 [2024-07-24 19:56:43.853838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.431 [2024-07-24 19:56:43.853863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.431 [2024-07-24 19:56:43.858504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.431 [2024-07-24 19:56:43.858601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.431 [2024-07-24 19:56:43.858624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.431 [2024-07-24 19:56:43.863369] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with 
pdu=0x2000190fef90 00:18:15.431 [2024-07-24 19:56:43.863454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.431 [2024-07-24 19:56:43.863478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.431 [2024-07-24 19:56:43.868125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.431 [2024-07-24 19:56:43.868223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.431 [2024-07-24 19:56:43.868247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.431 [2024-07-24 19:56:43.873116] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.431 [2024-07-24 19:56:43.873199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.431 [2024-07-24 19:56:43.873223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.431 [2024-07-24 19:56:43.878142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.431 [2024-07-24 19:56:43.878278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.431 [2024-07-24 19:56:43.878318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.431 [2024-07-24 19:56:43.883032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.431 [2024-07-24 19:56:43.883207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.431 [2024-07-24 19:56:43.883231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.431 [2024-07-24 19:56:43.888098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.431 [2024-07-24 19:56:43.888190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.431 [2024-07-24 19:56:43.888214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.431 [2024-07-24 19:56:43.893143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.431 [2024-07-24 19:56:43.893327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.431 [2024-07-24 19:56:43.893352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.431 [2024-07-24 19:56:43.897972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.431 [2024-07-24 19:56:43.898151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.431 [2024-07-24 19:56:43.898190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.431 [2024-07-24 19:56:43.902549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.431 [2024-07-24 19:56:43.902736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.431 [2024-07-24 19:56:43.902760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.431 [2024-07-24 19:56:43.907412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.431 [2024-07-24 19:56:43.907719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-24 19:56:43.907764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.432 [2024-07-24 19:56:43.912171] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.432 [2024-07-24 19:56:43.912263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-24 19:56:43.912290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.432 [2024-07-24 19:56:43.917127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.432 [2024-07-24 19:56:43.917231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-24 19:56:43.917257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.432 [2024-07-24 19:56:43.921981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.432 [2024-07-24 19:56:43.922065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-24 19:56:43.922090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.432 [2024-07-24 19:56:43.926893] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.432 [2024-07-24 19:56:43.926994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-24 19:56:43.927018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.432 [2024-07-24 19:56:43.931765] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.432 [2024-07-24 19:56:43.931849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-24 19:56:43.931874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.432 [2024-07-24 19:56:43.936615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.432 [2024-07-24 19:56:43.936734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-24 19:56:43.936758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.432 [2024-07-24 19:56:43.941541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.432 [2024-07-24 19:56:43.941628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-24 19:56:43.941651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.432 [2024-07-24 19:56:43.946372] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.432 [2024-07-24 19:56:43.946457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-24 19:56:43.946480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.432 [2024-07-24 19:56:43.951281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.432 [2024-07-24 19:56:43.951371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-24 19:56:43.951395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.432 [2024-07-24 19:56:43.956170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.432 [2024-07-24 19:56:43.956257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-24 19:56:43.956280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.432 [2024-07-24 19:56:43.961066] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.432 [2024-07-24 19:56:43.961149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-24 19:56:43.961173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.432 [2024-07-24 19:56:43.965902] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.432 [2024-07-24 19:56:43.966011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-24 19:56:43.966035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.432 [2024-07-24 19:56:43.970837] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.432 [2024-07-24 19:56:43.970936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-24 19:56:43.970960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.432 [2024-07-24 19:56:43.975633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.432 [2024-07-24 19:56:43.975740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-24 19:56:43.975797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.432 [2024-07-24 19:56:43.980522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.432 [2024-07-24 19:56:43.980623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-24 19:56:43.980665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.432 [2024-07-24 19:56:43.985447] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.432 [2024-07-24 19:56:43.985547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-24 19:56:43.985572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.432 [2024-07-24 19:56:43.990449] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.432 [2024-07-24 19:56:43.990544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-24 19:56:43.990572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.432 [2024-07-24 19:56:43.995387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.432 [2024-07-24 19:56:43.995487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-24 19:56:43.995511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.432 
[2024-07-24 19:56:44.000177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.432 [2024-07-24 19:56:44.000331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-24 19:56:44.000373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.432 [2024-07-24 19:56:44.005121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.432 [2024-07-24 19:56:44.005221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-24 19:56:44.005244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.432 [2024-07-24 19:56:44.009930] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.432 [2024-07-24 19:56:44.010044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-24 19:56:44.010069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.432 [2024-07-24 19:56:44.014718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.432 [2024-07-24 19:56:44.014872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-24 19:56:44.014896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.432 [2024-07-24 19:56:44.019555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.432 [2024-07-24 19:56:44.019725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-24 19:56:44.019748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.432 [2024-07-24 19:56:44.024573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.432 [2024-07-24 19:56:44.024820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.432 [2024-07-24 19:56:44.024844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.432 [2024-07-24 19:56:44.029147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.433 [2024-07-24 19:56:44.029349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-24 19:56:44.029374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:18:15.433 [2024-07-24 19:56:44.033979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.433 [2024-07-24 19:56:44.034288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-24 19:56:44.034320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.433 [2024-07-24 19:56:44.039077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.433 [2024-07-24 19:56:44.039370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-24 19:56:44.039402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.433 [2024-07-24 19:56:44.043869] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.433 [2024-07-24 19:56:44.043953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-24 19:56:44.043976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.433 [2024-07-24 19:56:44.048584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.433 [2024-07-24 19:56:44.048697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-24 19:56:44.048721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.433 [2024-07-24 19:56:44.053491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.433 [2024-07-24 19:56:44.053575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-24 19:56:44.053599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.433 [2024-07-24 19:56:44.058358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.433 [2024-07-24 19:56:44.058458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-24 19:56:44.058482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.433 [2024-07-24 19:56:44.063190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.433 [2024-07-24 19:56:44.063273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-24 19:56:44.063298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.433 [2024-07-24 19:56:44.068026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.433 [2024-07-24 19:56:44.068112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-24 19:56:44.068135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.433 [2024-07-24 19:56:44.072936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.433 [2024-07-24 19:56:44.073036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-24 19:56:44.073060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.433 [2024-07-24 19:56:44.077863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.433 [2024-07-24 19:56:44.077934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-24 19:56:44.077959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.433 [2024-07-24 19:56:44.083037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.433 [2024-07-24 19:56:44.083159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-24 19:56:44.083198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.433 [2024-07-24 19:56:44.088139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.433 [2024-07-24 19:56:44.088223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-24 19:56:44.088248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.433 [2024-07-24 19:56:44.093378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.433 [2024-07-24 19:56:44.093452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-24 19:56:44.093478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.433 [2024-07-24 19:56:44.098552] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.433 [2024-07-24 19:56:44.098651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.433 [2024-07-24 19:56:44.098678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.694 [2024-07-24 19:56:44.103835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.694 [2024-07-24 19:56:44.103907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.694 [2024-07-24 19:56:44.103932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.694 [2024-07-24 19:56:44.109059] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.694 [2024-07-24 19:56:44.109183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.694 [2024-07-24 19:56:44.109207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.694 [2024-07-24 19:56:44.114166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.694 [2024-07-24 19:56:44.114285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.694 [2024-07-24 19:56:44.114327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.694 [2024-07-24 19:56:44.119390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.694 [2024-07-24 19:56:44.119491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.694 [2024-07-24 19:56:44.119517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.694 [2024-07-24 19:56:44.124546] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.694 [2024-07-24 19:56:44.124721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.694 [2024-07-24 19:56:44.124749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.694 [2024-07-24 19:56:44.129549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.694 [2024-07-24 19:56:44.129669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.694 [2024-07-24 19:56:44.129695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.694 [2024-07-24 19:56:44.134556] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.694 [2024-07-24 19:56:44.134646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.694 [2024-07-24 19:56:44.134672] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.694 [2024-07-24 19:56:44.139699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.694 [2024-07-24 19:56:44.139868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.694 [2024-07-24 19:56:44.139895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.694 [2024-07-24 19:56:44.144798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.695 [2024-07-24 19:56:44.144999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.695 [2024-07-24 19:56:44.145024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.695 [2024-07-24 19:56:44.149292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.695 [2024-07-24 19:56:44.149478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.695 [2024-07-24 19:56:44.149502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.695 [2024-07-24 19:56:44.154287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.695 [2024-07-24 19:56:44.154572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.695 [2024-07-24 19:56:44.154605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.695 [2024-07-24 19:56:44.159042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.695 [2024-07-24 19:56:44.159165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.695 [2024-07-24 19:56:44.159189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.695 [2024-07-24 19:56:44.163890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.695 [2024-07-24 19:56:44.163977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.695 [2024-07-24 19:56:44.164001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.695 [2024-07-24 19:56:44.168880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.695 [2024-07-24 19:56:44.168977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.695 [2024-07-24 
19:56:44.169001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.695 [2024-07-24 19:56:44.173821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.695 [2024-07-24 19:56:44.173929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.695 [2024-07-24 19:56:44.173955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.695 [2024-07-24 19:56:44.178907] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.695 [2024-07-24 19:56:44.179005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.695 [2024-07-24 19:56:44.179035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.695 [2024-07-24 19:56:44.184222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.695 [2024-07-24 19:56:44.184303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.695 [2024-07-24 19:56:44.184349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.695 [2024-07-24 19:56:44.189395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.695 [2024-07-24 19:56:44.189488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.695 [2024-07-24 19:56:44.189519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.695 [2024-07-24 19:56:44.194529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.695 [2024-07-24 19:56:44.194632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.695 [2024-07-24 19:56:44.194664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.695 [2024-07-24 19:56:44.199653] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.695 [2024-07-24 19:56:44.199759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.695 [2024-07-24 19:56:44.199790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.695 [2024-07-24 19:56:44.204733] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.695 [2024-07-24 19:56:44.204839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:15.695 [2024-07-24 19:56:44.204869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.695 [2024-07-24 19:56:44.209725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.695 [2024-07-24 19:56:44.209836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.695 [2024-07-24 19:56:44.209865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.695 [2024-07-24 19:56:44.214967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.695 [2024-07-24 19:56:44.215057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.695 [2024-07-24 19:56:44.215084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.695 [2024-07-24 19:56:44.220080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.695 [2024-07-24 19:56:44.220166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.695 [2024-07-24 19:56:44.220193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.695 [2024-07-24 19:56:44.225236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.695 [2024-07-24 19:56:44.225309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.695 [2024-07-24 19:56:44.225336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.695 [2024-07-24 19:56:44.230285] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.695 [2024-07-24 19:56:44.230410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.695 [2024-07-24 19:56:44.230436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.695 [2024-07-24 19:56:44.235280] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.695 [2024-07-24 19:56:44.235382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.695 [2024-07-24 19:56:44.235408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.696 [2024-07-24 19:56:44.240259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.696 [2024-07-24 19:56:44.240363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21376 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.696 [2024-07-24 19:56:44.240390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.696 [2024-07-24 19:56:44.245346] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.696 [2024-07-24 19:56:44.245423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.696 [2024-07-24 19:56:44.245449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.696 [2024-07-24 19:56:44.250405] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.696 [2024-07-24 19:56:44.250563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.696 [2024-07-24 19:56:44.250590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.696 [2024-07-24 19:56:44.255430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.696 [2024-07-24 19:56:44.255558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.696 [2024-07-24 19:56:44.255583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.696 [2024-07-24 19:56:44.260540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.696 [2024-07-24 19:56:44.260620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.696 [2024-07-24 19:56:44.260646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.696 [2024-07-24 19:56:44.265796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.696 [2024-07-24 19:56:44.265986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.696 [2024-07-24 19:56:44.266012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.696 [2024-07-24 19:56:44.270453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.696 [2024-07-24 19:56:44.270646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.696 [2024-07-24 19:56:44.270670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.696 [2024-07-24 19:56:44.275587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.696 [2024-07-24 19:56:44.275894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.696 [2024-07-24 19:56:44.275926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.696 [2024-07-24 19:56:44.280536] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.696 [2024-07-24 19:56:44.280631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.696 [2024-07-24 19:56:44.280656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.696 [2024-07-24 19:56:44.285490] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.696 [2024-07-24 19:56:44.285579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.696 [2024-07-24 19:56:44.285604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.696 [2024-07-24 19:56:44.290461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.696 [2024-07-24 19:56:44.290556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.696 [2024-07-24 19:56:44.290583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.696 [2024-07-24 19:56:44.295461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.696 [2024-07-24 19:56:44.295556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.696 [2024-07-24 19:56:44.295585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.696 [2024-07-24 19:56:44.300598] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.696 [2024-07-24 19:56:44.300724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.696 [2024-07-24 19:56:44.300752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.696 [2024-07-24 19:56:44.305755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.696 [2024-07-24 19:56:44.305865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.696 [2024-07-24 19:56:44.305892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.696 [2024-07-24 19:56:44.310721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.696 [2024-07-24 19:56:44.310880] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.696 [2024-07-24 19:56:44.310910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.696 [2024-07-24 19:56:44.315700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.696 [2024-07-24 19:56:44.315838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.696 [2024-07-24 19:56:44.315885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.696 [2024-07-24 19:56:44.320712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.696 [2024-07-24 19:56:44.320837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.696 [2024-07-24 19:56:44.320866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.696 [2024-07-24 19:56:44.325851] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.696 [2024-07-24 19:56:44.325957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.696 [2024-07-24 19:56:44.325984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.696 [2024-07-24 19:56:44.330939] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.696 [2024-07-24 19:56:44.331030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.696 [2024-07-24 19:56:44.331057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.696 [2024-07-24 19:56:44.335964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.696 [2024-07-24 19:56:44.336052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.696 [2024-07-24 19:56:44.336079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.696 [2024-07-24 19:56:44.341122] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.696 [2024-07-24 19:56:44.341199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.697 [2024-07-24 19:56:44.341226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.697 [2024-07-24 19:56:44.346246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.697 [2024-07-24 19:56:44.346343] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.697 [2024-07-24 19:56:44.346368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.697 [2024-07-24 19:56:44.351272] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.697 [2024-07-24 19:56:44.351352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.697 [2024-07-24 19:56:44.351379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.697 [2024-07-24 19:56:44.356430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.697 [2024-07-24 19:56:44.356526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.697 [2024-07-24 19:56:44.356552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.697 [2024-07-24 19:56:44.361458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.697 [2024-07-24 19:56:44.361550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.697 [2024-07-24 19:56:44.361575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.956 [2024-07-24 19:56:44.366564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.956 [2024-07-24 19:56:44.366650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.956 [2024-07-24 19:56:44.366678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:15.956 [2024-07-24 19:56:44.371660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.956 [2024-07-24 19:56:44.371762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.956 [2024-07-24 19:56:44.371806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:15.956 [2024-07-24 19:56:44.376871] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.956 [2024-07-24 19:56:44.376983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.956 [2024-07-24 19:56:44.377009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:15.956 [2024-07-24 19:56:44.381898] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19f5080) with pdu=0x2000190fef90 00:18:15.956 [2024-07-24 
19:56:44.382053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:15.956 [2024-07-24 19:56:44.382079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:15.956 00:18:15.956 Latency(us) 00:18:15.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.956 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:15.956 nvme0n1 : 2.00 6111.42 763.93 0.00 0.00 2610.84 1794.79 10247.45 00:18:15.956 =================================================================================================================== 00:18:15.956 Total : 6111.42 763.93 0.00 0.00 2610.84 1794.79 10247.45 00:18:15.956 0 00:18:15.956 19:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:15.956 19:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:15.956 19:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:15.956 | .driver_specific 00:18:15.956 | .nvme_error 00:18:15.956 | .status_code 00:18:15.956 | .command_transient_transport_error' 00:18:15.956 19:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:16.213 19:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 394 > 0 )) 00:18:16.213 19:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79854 00:18:16.213 19:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79854 ']' 00:18:16.213 19:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79854 00:18:16.213 19:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:16.213 19:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:16.213 19:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79854 00:18:16.213 killing process with pid 79854 00:18:16.213 Received shutdown signal, test time was about 2.000000 seconds 00:18:16.213 00:18:16.213 Latency(us) 00:18:16.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.214 =================================================================================================================== 00:18:16.214 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:16.214 19:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:16.214 19:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:16.214 19:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79854' 00:18:16.214 19:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79854 00:18:16.214 19:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79854 00:18:16.501 19:56:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 79641 00:18:16.501 19:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 79641 ']' 00:18:16.501 19:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 79641 00:18:16.501 19:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:18:16.501 19:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:16.501 19:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79641 00:18:16.501 killing process with pid 79641 00:18:16.501 19:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:16.501 19:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:16.501 19:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79641' 00:18:16.502 19:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 79641 00:18:16.502 19:56:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 79641 00:18:16.793 ************************************ 00:18:16.793 END TEST nvmf_digest_error 00:18:16.793 ************************************ 00:18:16.793 00:18:16.793 real 0m18.642s 00:18:16.793 user 0m35.915s 00:18:16.793 sys 0m4.957s 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:16.793 rmmod nvme_tcp 00:18:16.793 rmmod nvme_fabrics 00:18:16.793 rmmod nvme_keyring 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 79641 ']' 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 79641 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 79641 ']' 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 79641 00:18:16.793 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (79641) - No 
such process 00:18:16.793 Process with pid 79641 is not found 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 79641 is not found' 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:16.793 ************************************ 00:18:16.793 END TEST nvmf_digest 00:18:16.793 ************************************ 00:18:16.793 00:18:16.793 real 0m37.400s 00:18:16.793 user 1m10.647s 00:18:16.793 sys 0m10.070s 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.793 ************************************ 00:18:16.793 START TEST nvmf_host_multipath 00:18:16.793 ************************************ 00:18:16.793 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:17.052 * Looking for test storage... 
00:18:17.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:17.052 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:17.052 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:17.052 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:17.052 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:17.052 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:17.052 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:17.052 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:17.052 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:17.052 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:17.052 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:17.052 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:17.052 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:17.052 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:18:17.052 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:18:17.052 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:17.052 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:17.052 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:17.052 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:17.052 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:17.052 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:17.052 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 
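For anyone following the multipath trace that starts here, the control-plane sequence it exercises condenses to roughly the sketch below. This is not new material: every command is lifted from the rpc.py, bdevperf, and bpftrace invocations that appear verbatim later in this log, with scripts/rpc.py used as shorthand for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path and $nvmf_pid standing in for the target pid (80117 in this run); only the ordering and the comments are added here.

  # Target side: TCP transport, one malloc namespace, two listeners on the same subsystem
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

  # Initiator side: bdevperf attaches both listeners to one controller, the second with -x multipath
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

  # Each test case steers I/O by flipping per-listener ANA state, then checks which port sees traffic
  scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
  scripts/bpftrace.sh "$nvmf_pid" scripts/bpf/nvmf_path.bt    # counts completed I/O per @path[addr, port]
  scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
    | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'

The per-path counts printed as "@path[10.0.0.2, 4421]: N" in the trace.txt excerpts below come from that bpftrace probe; a test case passes when the port actually carrying I/O matches the listener whose ANA state was left reachable (optimized or non_optimized), and shows no traffic when both listeners are set inaccessible.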
00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:17.053 19:56:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:17.053 Cannot find device "nvmf_tgt_br" 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:17.053 Cannot find device "nvmf_tgt_br2" 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:17.053 Cannot find device "nvmf_tgt_br" 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:17.053 Cannot find device "nvmf_tgt_br2" 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:17.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:17.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:17.053 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:17.312 19:56:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:17.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:17.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:18:17.312 00:18:17.312 --- 10.0.0.2 ping statistics --- 00:18:17.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.312 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:17.312 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:17.312 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:18:17.312 00:18:17.312 --- 10.0.0.3 ping statistics --- 00:18:17.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.312 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:17.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:17.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:18:17.312 00:18:17.312 --- 10.0.0.1 ping statistics --- 00:18:17.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.312 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=80117 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 80117 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 80117 ']' 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:17.312 19:56:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:17.313 [2024-07-24 19:56:45.968485] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:18:17.313 [2024-07-24 19:56:45.968608] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.571 [2024-07-24 19:56:46.109553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:17.571 [2024-07-24 19:56:46.223342] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.571 [2024-07-24 19:56:46.223423] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.571 [2024-07-24 19:56:46.223449] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.571 [2024-07-24 19:56:46.223458] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.571 [2024-07-24 19:56:46.223465] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:17.571 [2024-07-24 19:56:46.223629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.571 [2024-07-24 19:56:46.223640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.829 [2024-07-24 19:56:46.278770] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:18.396 19:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:18.396 19:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:18:18.396 19:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:18.396 19:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:18.396 19:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:18.396 19:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.396 19:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80117 00:18:18.396 19:56:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:18.655 [2024-07-24 19:56:47.183535] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.655 19:56:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:18.913 Malloc0 00:18:18.913 19:56:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:19.172 19:56:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:19.430 19:56:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:19.745 [2024-07-24 19:56:48.168036] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:19.745 19:56:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:19.745 [2024-07-24 19:56:48.392095] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:19.745 19:56:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:19.745 19:56:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80167 00:18:19.745 19:56:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:19.745 19:56:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80167 /var/tmp/bdevperf.sock 00:18:19.745 19:56:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 80167 ']' 00:18:19.745 19:56:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.745 19:56:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:19.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.745 19:56:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:19.745 19:56:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:19.745 19:56:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:21.141 19:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:21.141 19:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:18:21.141 19:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:21.400 19:56:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:21.658 Nvme0n1 00:18:21.658 19:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:21.917 Nvme0n1 00:18:21.917 19:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:21.917 19:56:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:22.906 19:56:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:22.906 19:56:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:23.164 19:56:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:23.423 19:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:23.423 19:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80117 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:23.423 19:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80218 00:18:23.423 19:56:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:29.985 19:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:29.985 19:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:29.985 19:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:29.985 19:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:29.985 Attaching 4 probes... 00:18:29.985 @path[10.0.0.2, 4421]: 17981 00:18:29.985 @path[10.0.0.2, 4421]: 18196 00:18:29.985 @path[10.0.0.2, 4421]: 18182 00:18:29.985 @path[10.0.0.2, 4421]: 18153 00:18:29.985 @path[10.0.0.2, 4421]: 18186 00:18:29.985 19:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:29.985 19:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:29.985 19:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:29.985 19:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:29.985 19:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:29.985 19:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:29.985 19:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80218 00:18:29.985 19:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:29.985 19:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:29.985 19:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:29.985 19:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:30.244 19:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:30.244 19:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80329 00:18:30.244 19:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:30.244 19:56:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80117 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:36.819 19:57:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:36.819 19:57:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:36.819 19:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:36.819 19:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:36.819 Attaching 4 probes... 00:18:36.819 @path[10.0.0.2, 4420]: 17846 00:18:36.819 @path[10.0.0.2, 4420]: 18081 00:18:36.819 @path[10.0.0.2, 4420]: 18170 00:18:36.819 @path[10.0.0.2, 4420]: 18216 00:18:36.819 @path[10.0.0.2, 4420]: 18246 00:18:36.819 19:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:36.819 19:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:36.819 19:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:36.819 19:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:36.819 19:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:36.819 19:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:36.819 19:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80329 00:18:36.819 19:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:36.819 19:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:36.819 19:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:36.819 19:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:37.077 19:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:37.077 19:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80443 00:18:37.077 19:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80117 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:37.077 19:57:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:43.637 19:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:43.637 19:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:43.637 19:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:43.637 19:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:43.637 Attaching 4 probes... 00:18:43.637 @path[10.0.0.2, 4421]: 13634 00:18:43.637 @path[10.0.0.2, 4421]: 17650 00:18:43.637 @path[10.0.0.2, 4421]: 17803 00:18:43.637 @path[10.0.0.2, 4421]: 17866 00:18:43.637 @path[10.0.0.2, 4421]: 17817 00:18:43.637 19:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:43.637 19:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:43.637 19:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:43.637 19:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:43.637 19:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:43.637 19:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:43.637 19:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80443 00:18:43.637 19:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:43.637 19:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:43.637 19:57:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:43.637 19:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:43.894 19:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:43.894 19:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80117 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:43.894 19:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80561 00:18:43.894 19:57:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:50.446 19:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:50.447 19:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:50.447 19:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:50.447 19:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:50.447 Attaching 4 probes... 
00:18:50.447 00:18:50.447 00:18:50.447 00:18:50.447 00:18:50.447 00:18:50.447 19:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:50.447 19:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:50.447 19:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:50.447 19:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:18:50.447 19:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:50.447 19:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:50.447 19:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80561 00:18:50.447 19:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:50.447 19:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:50.447 19:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:50.447 19:57:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:50.447 19:57:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:50.447 19:57:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80668 00:18:50.447 19:57:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80117 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:50.447 19:57:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:57.026 19:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:57.026 19:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:57.026 19:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:57.026 19:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:57.026 Attaching 4 probes... 
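Each ANA transition above is driven by the set_ANA_state helper (host/multipath.sh@58-@59): one nvmf_subsystem_listener_set_ana_state RPC per listening port, first 4420 and then 4421. A sketch assembled from exactly the two calls shown in the trace; only the function wrapper itself is an assumption:

  # set_ANA_state <state for 4420> <state for 4421>, as seen at multipath.sh@58-@59 above.
  set_ANA_state() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
          nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
          nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }
  # e.g. "set_ANA_state non_optimized optimized" (multipath.sh@96 above) makes 4421 the preferred path.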
00:18:57.026 @path[10.0.0.2, 4421]: 17112 00:18:57.026 @path[10.0.0.2, 4421]: 17412 00:18:57.026 @path[10.0.0.2, 4421]: 16055 00:18:57.026 @path[10.0.0.2, 4421]: 16640 00:18:57.026 @path[10.0.0.2, 4421]: 17169 00:18:57.026 19:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:57.026 19:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:57.026 19:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:57.026 19:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:57.026 19:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:57.026 19:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:57.026 19:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80668 00:18:57.026 19:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:57.026 19:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:57.026 [2024-07-24 19:57:25.685155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799380 is same with the state(5) to be set 00:18:57.026 [2024-07-24 19:57:25.685221] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799380 is same with the state(5) to be set 00:18:57.026 [2024-07-24 19:57:25.685233] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1799380 is same with the state(5) to be set 00:18:57.283 19:57:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:18:58.219 19:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:58.219 19:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80796 00:18:58.219 19:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80117 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:58.219 19:57:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:04.782 19:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:04.782 19:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:04.782 19:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:04.782 19:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:04.782 Attaching 4 probes... 
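The @100 step above removes the 4421 listener while bdevperf is still running, which is what forces I/O back onto 4420 in the following confirm cycle. A hedged way to double-check the removal from the shell, using only RPCs that already appear in this log (this check is not part of the test itself):

  # After nvmf_subsystem_remove_listener for port 4421, only 4420 should remain in the list.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
      | jq -r '.[].address.trsvcid'
  # expected output: 4420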
00:19:04.782 @path[10.0.0.2, 4420]: 17206 00:19:04.782 @path[10.0.0.2, 4420]: 17249 00:19:04.782 @path[10.0.0.2, 4420]: 16755 00:19:04.782 @path[10.0.0.2, 4420]: 17191 00:19:04.782 @path[10.0.0.2, 4420]: 17563 00:19:04.782 19:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:04.782 19:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:04.782 19:57:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:04.782 19:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:04.782 19:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:04.782 19:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:04.782 19:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80796 00:19:04.782 19:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:04.782 19:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:04.782 [2024-07-24 19:57:33.268452] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:04.782 19:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:05.040 19:57:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:11.600 19:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:11.600 19:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80966 00:19:11.600 19:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80117 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:11.600 19:57:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:18.165 19:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:18.165 19:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:18.165 19:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:18.165 19:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:18.165 Attaching 4 probes... 
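The active_port assignments in these cycles come straight out of the nvmf_subsystem_get_listeners JSON. Below is an abbreviated, illustrative listener document run through the same jq filter used at multipath.sh@67; only the fields the filter references (.ana_states[0].ana_state and .address.trsvcid) and the values are taken from this run, the rest of the JSON shape is an assumption:

  # Illustrative listener list (abbreviated); piped through the filter from multipath.sh@67.
  echo '[{"address":{"trtype":"TCP","traddr":"10.0.0.2","trsvcid":"4420"},"ana_states":[{"ana_group":1,"ana_state":"non_optimized"}]},
         {"address":{"trtype":"TCP","traddr":"10.0.0.2","trsvcid":"4421"},"ana_states":[{"ana_group":1,"ana_state":"optimized"}]}]' |
      jq -r '.[] | select(.ana_states[0].ana_state=="optimized") | .address.trsvcid'
  # prints 4421, which the test then compares against the port seen by the nvmf_path.bt probes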
00:19:18.165 @path[10.0.0.2, 4421]: 17313 00:19:18.165 @path[10.0.0.2, 4421]: 17610 00:19:18.165 @path[10.0.0.2, 4421]: 17428 00:19:18.165 @path[10.0.0.2, 4421]: 17672 00:19:18.165 @path[10.0.0.2, 4421]: 17540 00:19:18.165 19:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:18.165 19:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:18.165 19:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:18.165 19:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:18.165 19:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:18.165 19:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:18.165 19:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80966 00:19:18.165 19:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:18.165 19:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80167 00:19:18.165 19:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 80167 ']' 00:19:18.165 19:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 80167 00:19:18.165 19:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:19:18.165 19:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:18.165 19:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80167 00:19:18.166 19:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:18.166 19:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:18.166 killing process with pid 80167 00:19:18.166 19:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80167' 00:19:18.166 19:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 80167 00:19:18.166 19:57:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 80167 00:19:18.166 Connection closed with partial response: 00:19:18.166 00:19:18.166 00:19:18.166 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80167 00:19:18.166 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:18.166 [2024-07-24 19:56:48.452493] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
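Teardown of the bdevperf process (pid 80167), just before the try.txt dump that follows, goes through the shared killprocess helper; its control flow can be read off the common/autotest_common.sh@950-@974 trace above. A sketch reconstructed from that trace alone; branches the log does not exercise (for example a sudo-wrapped process) are left out, and the real source may differ:

  # killprocess <pid>, as reconstructed from the autotest_common.sh trace above.
  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                              # @950: refuse an empty pid
      kill -0 "$pid" || return 1                             # @954: process must still exist
      [ "$(uname)" = Linux ] &&                              # @955
          process_name=$(ps --no-headers -o comm= "$pid")    # @956: "reactor_2" for bdevperf here
      if [ "$process_name" != sudo ]; then                   # @960: sudo branch not taken in this run
          echo "killing process with pid $pid"               # @968
          kill "$pid"                                        # @969
          wait "$pid"                                        # @974: reap it and collect the exit code
      fi
  }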
00:19:18.166 [2024-07-24 19:56:48.452606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80167 ] 00:19:18.166 [2024-07-24 19:56:48.592148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.166 [2024-07-24 19:56:48.715762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.166 [2024-07-24 19:56:48.772967] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:18.166 Running I/O for 90 seconds... 00:19:18.166 [2024-07-24 19:56:58.708159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:47144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.166 [2024-07-24 19:56:58.708236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:18.166 [2024-07-24 19:56:58.708302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:47152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.166 [2024-07-24 19:56:58.708328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:18.166 [2024-07-24 19:56:58.708357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:47160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.166 [2024-07-24 19:56:58.708377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:18.166 [2024-07-24 19:56:58.708415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:47168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.166 [2024-07-24 19:56:58.708438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:18.166 [2024-07-24 19:56:58.708465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:47176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.166 [2024-07-24 19:56:58.708485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.166 [2024-07-24 19:56:58.708512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.166 [2024-07-24 19:56:58.708531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.166 [2024-07-24 19:56:58.708557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:47192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.166 [2024-07-24 19:56:58.708576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:18.166 [2024-07-24 19:56:58.708603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:47200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.166 [2024-07-24 19:56:58.708622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 
sqhd:0003 p:0 m:0 dnr:0 00:19:18.166 [2024-07-24 19:56:58.708648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.166 [2024-07-24 19:56:58.708668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:18.166 [2024-07-24 19:56:58.708694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:46640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.166 [2024-07-24 19:56:58.708713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:18.166 [2024-07-24 19:56:58.708754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:46648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.166 [2024-07-24 19:56:58.708806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:18.166 [2024-07-24 19:56:58.708836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:46656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.166 [2024-07-24 19:56:58.708857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:18.166 [2024-07-24 19:56:58.708884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.166 [2024-07-24 19:56:58.708904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:18.166 [2024-07-24 19:56:58.708929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.166 [2024-07-24 19:56:58.708948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:18.166 [2024-07-24 19:56:58.708974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.166 [2024-07-24 19:56:58.708994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:18.166 [2024-07-24 19:56:58.709019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.166 [2024-07-24 19:56:58.709039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:18.166 [2024-07-24 19:56:58.709065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:46696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.166 [2024-07-24 19:56:58.709085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:18.166 [2024-07-24 19:56:58.709111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.166 [2024-07-24 19:56:58.709131] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:18.166 [2024-07-24 19:56:58.709157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:46712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.166 [2024-07-24 19:56:58.709176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:18.166 [2024-07-24 19:56:58.709202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.166 [2024-07-24 19:56:58.709221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:18.166 [2024-07-24 19:56:58.709247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:46728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.166 [2024-07-24 19:56:58.709266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:18.166 [2024-07-24 19:56:58.709292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:46736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.166 [2024-07-24 19:56:58.709311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:18.166 [2024-07-24 19:56:58.709337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:46744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.166 [2024-07-24 19:56:58.709365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:18.166 [2024-07-24 19:56:58.709393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:46752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.166 [2024-07-24 19:56:58.709413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:18.166 [2024-07-24 19:56:58.709444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:47208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.166 [2024-07-24 19:56:58.709465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:18.166 [2024-07-24 19:56:58.709492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:47216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.166 [2024-07-24 19:56:58.709512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:18.166 [2024-07-24 19:56:58.709537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.167 [2024-07-24 19:56:58.709556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.709582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:47232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.167 [2024-07-24 
19:56:58.709602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.709628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:47240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.167 [2024-07-24 19:56:58.709647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.709673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.167 [2024-07-24 19:56:58.709692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.709718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:47256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.167 [2024-07-24 19:56:58.709750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.709780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:47264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.167 [2024-07-24 19:56:58.709802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.709828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:47272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.167 [2024-07-24 19:56:58.709847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.709873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:47280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.167 [2024-07-24 19:56:58.709893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.709918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:47288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.167 [2024-07-24 19:56:58.709937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.709973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:47296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.167 [2024-07-24 19:56:58.709994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.710021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:46760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.167 [2024-07-24 19:56:58.710040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.710066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:46768 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:18.167 [2024-07-24 19:56:58.710085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.710111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.167 [2024-07-24 19:56:58.710131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.710156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:46784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.167 [2024-07-24 19:56:58.710175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.710201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:46792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.167 [2024-07-24 19:56:58.710220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.710246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:46800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.167 [2024-07-24 19:56:58.710265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.710291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:46808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.167 [2024-07-24 19:56:58.710310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.710336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:46816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.167 [2024-07-24 19:56:58.710357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.710382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:47304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.167 [2024-07-24 19:56:58.710401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.710426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:47312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.167 [2024-07-24 19:56:58.710446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.710472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:47320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.167 [2024-07-24 19:56:58.710492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.710526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:83 nsid:1 lba:47328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.167 [2024-07-24 19:56:58.710547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.710577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:47336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.167 [2024-07-24 19:56:58.710598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.710624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:47344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.167 [2024-07-24 19:56:58.710644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.710669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:47352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.167 [2024-07-24 19:56:58.710688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.710714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:47360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.167 [2024-07-24 19:56:58.710733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.710778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.167 [2024-07-24 19:56:58.710798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.710824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:47376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.167 [2024-07-24 19:56:58.710843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.710869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.167 [2024-07-24 19:56:58.710888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.710913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.167 [2024-07-24 19:56:58.710932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.710958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.167 [2024-07-24 19:56:58.710977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.711003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:46832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.167 [2024-07-24 19:56:58.711022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.711047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:46840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.167 [2024-07-24 19:56:58.711066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.711103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:46848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.167 [2024-07-24 19:56:58.711125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.711151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:46856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.167 [2024-07-24 19:56:58.711171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:18.167 [2024-07-24 19:56:58.711196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:46864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.168 [2024-07-24 19:56:58.711215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.711242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.168 [2024-07-24 19:56:58.711261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.711286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:46880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.168 [2024-07-24 19:56:58.711306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.711332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.168 [2024-07-24 19:56:58.711352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.711378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:46896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.168 [2024-07-24 19:56:58.711397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.711423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:46904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.168 [2024-07-24 19:56:58.711442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003e p:0 m:0 dnr:0 
00:19:18.168 [2024-07-24 19:56:58.711469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:46912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.168 [2024-07-24 19:56:58.711488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.711514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:46920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.168 [2024-07-24 19:56:58.711533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.711559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:46928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.168 [2024-07-24 19:56:58.711578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.711604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:46936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.168 [2024-07-24 19:56:58.711623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.711650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:46944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.168 [2024-07-24 19:56:58.711676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.711709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.168 [2024-07-24 19:56:58.711730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.711772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.168 [2024-07-24 19:56:58.711793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.711819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:47416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.168 [2024-07-24 19:56:58.711838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.711864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.168 [2024-07-24 19:56:58.711884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.711909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.168 [2024-07-24 19:56:58.711929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.711954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:47440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.168 [2024-07-24 19:56:58.711974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.712000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:47448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.168 [2024-07-24 19:56:58.712019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.712044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:47456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.168 [2024-07-24 19:56:58.712064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.712090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.168 [2024-07-24 19:56:58.712110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.712135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:47472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.168 [2024-07-24 19:56:58.712154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.712180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:47480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.168 [2024-07-24 19:56:58.712200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.712226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:47488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.168 [2024-07-24 19:56:58.712254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.712282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.168 [2024-07-24 19:56:58.712302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.712327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.168 [2024-07-24 19:56:58.712347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.712373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:47512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.168 [2024-07-24 19:56:58.712402] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.712431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.168 [2024-07-24 19:56:58.712451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.712477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:47528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.168 [2024-07-24 19:56:58.712497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.712522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:47536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.168 [2024-07-24 19:56:58.712541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.712567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:47544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.168 [2024-07-24 19:56:58.712586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.712612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:47552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.168 [2024-07-24 19:56:58.712631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.712656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:47560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.168 [2024-07-24 19:56:58.712675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.712700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:47568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.168 [2024-07-24 19:56:58.712720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.712760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:47576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.168 [2024-07-24 19:56:58.712783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:18.168 [2024-07-24 19:56:58.712817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:47584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.169 [2024-07-24 19:56:58.712838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.712874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:46952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:18.169 [2024-07-24 19:56:58.712895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.712920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:46960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.169 [2024-07-24 19:56:58.712940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.712966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.169 [2024-07-24 19:56:58.712986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.713012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:46976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.169 [2024-07-24 19:56:58.713032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.713057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:46984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.169 [2024-07-24 19:56:58.713077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.713103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:46992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.169 [2024-07-24 19:56:58.713122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.713148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:47000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.169 [2024-07-24 19:56:58.713168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.713193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:47008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.169 [2024-07-24 19:56:58.713213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.713238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.169 [2024-07-24 19:56:58.713258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.713284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.169 [2024-07-24 19:56:58.713303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.713328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 
nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.169 [2024-07-24 19:56:58.713348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.713374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:47616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.169 [2024-07-24 19:56:58.713393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.713428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:47624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.169 [2024-07-24 19:56:58.713449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.713476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.169 [2024-07-24 19:56:58.713496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.713522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:47640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.169 [2024-07-24 19:56:58.713542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.713572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:47648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.169 [2024-07-24 19:56:58.713593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.713619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:47016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.169 [2024-07-24 19:56:58.713638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.713664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:47024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.169 [2024-07-24 19:56:58.713683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.713709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:47032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.169 [2024-07-24 19:56:58.713728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.713769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.169 [2024-07-24 19:56:58.713790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.713816] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:47048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.169 [2024-07-24 19:56:58.713836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.713861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:47056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.169 [2024-07-24 19:56:58.713881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.713907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:47064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.169 [2024-07-24 19:56:58.713926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.713952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:47072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.169 [2024-07-24 19:56:58.713971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.713998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:47080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.169 [2024-07-24 19:56:58.714025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.714053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:47088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.169 [2024-07-24 19:56:58.714072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.714099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:47096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.169 [2024-07-24 19:56:58.714118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.714144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:47104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.169 [2024-07-24 19:56:58.714163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.714189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:47112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.169 [2024-07-24 19:56:58.714209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.714234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.169 [2024-07-24 19:56:58.714253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 
dnr:0 00:19:18.169 [2024-07-24 19:56:58.714280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:47128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.169 [2024-07-24 19:56:58.714299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:56:58.715494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:47136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.169 [2024-07-24 19:56:58.715526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:57:05.253933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.169 [2024-07-24 19:57:05.254006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:57:05.254073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.169 [2024-07-24 19:57:05.254099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:57:05.254127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.169 [2024-07-24 19:57:05.254146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:18.169 [2024-07-24 19:57:05.254172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.169 [2024-07-24 19:57:05.254191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:18.170 [2024-07-24 19:57:05.254217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.170 [2024-07-24 19:57:05.254261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:18.170 [2024-07-24 19:57:05.254290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.170 [2024-07-24 19:57:05.254310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:18.170 [2024-07-24 19:57:05.254336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.170 [2024-07-24 19:57:05.254355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:18.170 [2024-07-24 19:57:05.254381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.170 [2024-07-24 19:57:05.254400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:18.170 [2024-07-24 19:57:05.254426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.170 [2024-07-24 19:57:05.254444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:18.170 [2024-07-24 19:57:05.254470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:128312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.170 [2024-07-24 19:57:05.254488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:18.170 [2024-07-24 19:57:05.254513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.170 [2024-07-24 19:57:05.254542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:18.170 [2024-07-24 19:57:05.254567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.170 [2024-07-24 19:57:05.254586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:18.170 [2024-07-24 19:57:05.254613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.170 [2024-07-24 19:57:05.254632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:18.170 [2024-07-24 19:57:05.254657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.170 [2024-07-24 19:57:05.254676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:18.170 [2024-07-24 19:57:05.254701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.170 [2024-07-24 19:57:05.254720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:18.170 [2024-07-24 19:57:05.254761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:128360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.170 [2024-07-24 19:57:05.254784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:18.170 [2024-07-24 19:57:05.254810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:128368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.170 [2024-07-24 19:57:05.254829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:18.170 [2024-07-24 19:57:05.254870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.170 [2024-07-24 19:57:05.254891] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:18.170 [2024-07-24 19:57:05.254917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.170 [2024-07-24 19:57:05.254936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:18.170 [2024-07-24 19:57:05.254962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:128392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.170 [2024-07-24 19:57:05.254981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:18.170 [2024-07-24 19:57:05.255006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:128400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.170 [2024-07-24 19:57:05.255025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:18.170 [2024-07-24 19:57:05.255051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.170 [2024-07-24 19:57:05.255070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:18.170 [2024-07-24 19:57:05.255095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.170 [2024-07-24 19:57:05.255114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:18.170 [2024-07-24 19:57:05.255139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.170 [2024-07-24 19:57:05.255158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:18.170 [2024-07-24 19:57:05.255184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.170 [2024-07-24 19:57:05.255202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:18.170 [2024-07-24 19:57:05.255228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.170 [2024-07-24 19:57:05.255247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:18.170 [2024-07-24 19:57:05.255272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:128448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.170 [2024-07-24 19:57:05.255291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:18.170 [2024-07-24 19:57:05.255317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128456 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.170 [2024-07-24 19:57:05.255336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.255363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.171 [2024-07-24 19:57:05.255381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.255416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.171 [2024-07-24 19:57:05.255436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.255462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.171 [2024-07-24 19:57:05.255481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.255507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.171 [2024-07-24 19:57:05.255525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.255556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.171 [2024-07-24 19:57:05.255577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.255605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.171 [2024-07-24 19:57:05.255624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.255649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.171 [2024-07-24 19:57:05.255678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.255704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.171 [2024-07-24 19:57:05.255723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.255764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.171 [2024-07-24 19:57:05.255787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.255813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.171 [2024-07-24 19:57:05.255833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.255858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.171 [2024-07-24 19:57:05.255877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.255903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.171 [2024-07-24 19:57:05.255922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.255947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.171 [2024-07-24 19:57:05.255966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.256001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.171 [2024-07-24 19:57:05.256021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.256047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.171 [2024-07-24 19:57:05.256066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.256092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.171 [2024-07-24 19:57:05.256111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.256136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.171 [2024-07-24 19:57:05.256155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.256180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.171 [2024-07-24 19:57:05.256199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.256225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.171 [2024-07-24 19:57:05.256243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:19:18.171 [2024-07-24 19:57:05.256269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:128552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.171 [2024-07-24 19:57:05.256287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.256313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.171 [2024-07-24 19:57:05.256332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.256357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.171 [2024-07-24 19:57:05.256376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.256401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.171 [2024-07-24 19:57:05.256420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.256461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.171 [2024-07-24 19:57:05.256482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.256507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.171 [2024-07-24 19:57:05.256526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.256552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.171 [2024-07-24 19:57:05.256579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.256607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.171 [2024-07-24 19:57:05.256627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.256652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.171 [2024-07-24 19:57:05.256671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.256696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.171 [2024-07-24 19:57:05.256720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.256775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.171 [2024-07-24 19:57:05.256796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.256822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.171 [2024-07-24 19:57:05.256841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.256867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.171 [2024-07-24 19:57:05.256886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.256911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.171 [2024-07-24 19:57:05.256930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.256955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.171 [2024-07-24 19:57:05.256974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.256999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.171 [2024-07-24 19:57:05.257018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.257044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.171 [2024-07-24 19:57:05.257062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:18.171 [2024-07-24 19:57:05.257117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.172 [2024-07-24 19:57:05.257142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.257168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.172 [2024-07-24 19:57:05.257199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.257227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.172 [2024-07-24 19:57:05.257248] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.257275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.172 [2024-07-24 19:57:05.257294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.257320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.172 [2024-07-24 19:57:05.257339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.257364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.172 [2024-07-24 19:57:05.257383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.257409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.172 [2024-07-24 19:57:05.257428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.257453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.172 [2024-07-24 19:57:05.257471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.257496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:128560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.172 [2024-07-24 19:57:05.257515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.257541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.172 [2024-07-24 19:57:05.257560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.257585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.172 [2024-07-24 19:57:05.257604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.257629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:128584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.172 [2024-07-24 19:57:05.257648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.257673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:128592 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:18.172 [2024-07-24 19:57:05.257692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.257717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:128600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.172 [2024-07-24 19:57:05.257750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.257791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:128608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.172 [2024-07-24 19:57:05.257812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.257837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:128616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.172 [2024-07-24 19:57:05.257856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.257882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.172 [2024-07-24 19:57:05.257901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.257927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:128632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.172 [2024-07-24 19:57:05.257946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.257971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:128640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.172 [2024-07-24 19:57:05.257990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.258016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.172 [2024-07-24 19:57:05.258036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.258061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.172 [2024-07-24 19:57:05.258079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.258105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:128664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.172 [2024-07-24 19:57:05.258123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.258149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:66 nsid:1 lba:128672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.172 [2024-07-24 19:57:05.258168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.258193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:128680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.172 [2024-07-24 19:57:05.258211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.258236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.172 [2024-07-24 19:57:05.258255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.258280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:129144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.172 [2024-07-24 19:57:05.258299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.258413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.172 [2024-07-24 19:57:05.258436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.258462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.172 [2024-07-24 19:57:05.258481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.258505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.172 [2024-07-24 19:57:05.258529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.258554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.172 [2024-07-24 19:57:05.258573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.258598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.172 [2024-07-24 19:57:05.258617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.258643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.172 [2024-07-24 19:57:05.258662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 
19:57:05.258687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.172 [2024-07-24 19:57:05.258706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.258731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.172 [2024-07-24 19:57:05.258765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.258803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.172 [2024-07-24 19:57:05.258824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.258849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.172 [2024-07-24 19:57:05.258868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.258893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.172 [2024-07-24 19:57:05.258912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:18.172 [2024-07-24 19:57:05.258938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:128696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.173 [2024-07-24 19:57:05.258957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:05.258993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:128704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.173 [2024-07-24 19:57:05.259013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:05.259038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:128712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.173 [2024-07-24 19:57:05.259058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:05.259083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:128720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.173 [2024-07-24 19:57:05.259102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:05.259127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:128728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.173 [2024-07-24 19:57:05.259146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:05.259171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.173 [2024-07-24 19:57:05.259190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:05.259903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.173 [2024-07-24 19:57:05.259934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:05.259977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:129232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.173 [2024-07-24 19:57:05.259998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:05.260033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.173 [2024-07-24 19:57:05.260053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:05.260086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.173 [2024-07-24 19:57:05.260106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:05.260140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:129256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.173 [2024-07-24 19:57:05.260159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:05.260192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.173 [2024-07-24 19:57:05.260212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:05.260245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:129272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.173 [2024-07-24 19:57:05.260265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:05.260299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.173 [2024-07-24 19:57:05.260330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:05.260386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.173 [2024-07-24 19:57:05.260410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:05.260457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.173 [2024-07-24 19:57:05.260481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:05.260515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.173 [2024-07-24 19:57:05.260534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:05.260569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.173 [2024-07-24 19:57:05.260589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:05.260623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.173 [2024-07-24 19:57:05.260642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:05.260676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.173 [2024-07-24 19:57:05.260695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:05.260730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.173 [2024-07-24 19:57:05.260766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:05.260805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.173 [2024-07-24 19:57:05.260826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:05.260862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:128776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.173 [2024-07-24 19:57:05.260881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:05.260917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.173 [2024-07-24 19:57:05.260937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:05.260972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:128792 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:18.173 [2024-07-24 19:57:05.260992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:05.261027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:128800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.173 [2024-07-24 19:57:05.261058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:05.261096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.173 [2024-07-24 19:57:05.261116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:12.304568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:34680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.173 [2024-07-24 19:57:12.304648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:12.304714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:34688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.173 [2024-07-24 19:57:12.304753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:12.304786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:34696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.173 [2024-07-24 19:57:12.304806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:12.304832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.173 [2024-07-24 19:57:12.304851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:12.304876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.173 [2024-07-24 19:57:12.304895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:12.304920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:34720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.173 [2024-07-24 19:57:12.304939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:12.304965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:34728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.173 [2024-07-24 19:57:12.304984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:12.305009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:93 nsid:1 lba:34736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.173 [2024-07-24 19:57:12.305029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:12.305055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:34296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.173 [2024-07-24 19:57:12.305074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:12.305100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:34304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.173 [2024-07-24 19:57:12.305118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:18.173 [2024-07-24 19:57:12.305151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:34312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.174 [2024-07-24 19:57:12.305170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.305224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.174 [2024-07-24 19:57:12.305244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.305270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.174 [2024-07-24 19:57:12.305288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.305314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:34336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.174 [2024-07-24 19:57:12.305333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.305358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.174 [2024-07-24 19:57:12.305376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.305401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:34352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.174 [2024-07-24 19:57:12.305420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.305451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:34744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.174 [2024-07-24 19:57:12.305472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.305498] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:34752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.174 [2024-07-24 19:57:12.305517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.305543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:34760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.174 [2024-07-24 19:57:12.305561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.305588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:34768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.174 [2024-07-24 19:57:12.305607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.305631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.174 [2024-07-24 19:57:12.305650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.305675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.174 [2024-07-24 19:57:12.305697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.305723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:34792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.174 [2024-07-24 19:57:12.305756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.305797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:34800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.174 [2024-07-24 19:57:12.305819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.305845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:34808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.174 [2024-07-24 19:57:12.305865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.305890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:34816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.174 [2024-07-24 19:57:12.305909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.305934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:34824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.174 [2024-07-24 19:57:12.305953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0038 p:0 m:0 
dnr:0 00:19:18.174 [2024-07-24 19:57:12.305979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:34832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.174 [2024-07-24 19:57:12.305998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.306023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:34840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.174 [2024-07-24 19:57:12.306042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.306067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:34848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.174 [2024-07-24 19:57:12.306087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.306112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.174 [2024-07-24 19:57:12.306131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.306156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:34864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.174 [2024-07-24 19:57:12.306174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.306200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:34872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.174 [2024-07-24 19:57:12.306219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.306244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.174 [2024-07-24 19:57:12.306264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.306289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.174 [2024-07-24 19:57:12.306307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.306333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:34896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.174 [2024-07-24 19:57:12.306361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.306388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.174 [2024-07-24 19:57:12.306407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.306433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:34912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.174 [2024-07-24 19:57:12.306452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.306478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.174 [2024-07-24 19:57:12.306497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.306522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:34928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.174 [2024-07-24 19:57:12.306542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.306568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:34360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.174 [2024-07-24 19:57:12.306587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.306613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.174 [2024-07-24 19:57:12.306632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.306657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.174 [2024-07-24 19:57:12.306676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.306702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.174 [2024-07-24 19:57:12.306721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.306767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:34392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.174 [2024-07-24 19:57:12.306790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.306816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.174 [2024-07-24 19:57:12.306836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.306862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:34408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.174 [2024-07-24 19:57:12.306881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.306906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:34416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.174 [2024-07-24 19:57:12.306935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:18.174 [2024-07-24 19:57:12.306962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:34424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.174 [2024-07-24 19:57:12.306982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.307009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:34432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.175 [2024-07-24 19:57:12.307029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.307055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:34440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.175 [2024-07-24 19:57:12.307074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.307100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:34448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.175 [2024-07-24 19:57:12.307119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.307145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.175 [2024-07-24 19:57:12.307163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.307188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:34464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.175 [2024-07-24 19:57:12.307207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.307233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:34472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.175 [2024-07-24 19:57:12.307252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.307278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:34480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.175 [2024-07-24 19:57:12.307297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.307327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:34936 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:18.175 [2024-07-24 19:57:12.307347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.307373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:34944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.175 [2024-07-24 19:57:12.307392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.307418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:34952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.175 [2024-07-24 19:57:12.307437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.307462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:34960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.175 [2024-07-24 19:57:12.307481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.307516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.175 [2024-07-24 19:57:12.307537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.307562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:34976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.175 [2024-07-24 19:57:12.307581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.307607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.175 [2024-07-24 19:57:12.307626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.307653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:34992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.175 [2024-07-24 19:57:12.307672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.307698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.175 [2024-07-24 19:57:12.307718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.307758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.175 [2024-07-24 19:57:12.307781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.307808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 
nsid:1 lba:35016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.175 [2024-07-24 19:57:12.307827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.307859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:35024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.175 [2024-07-24 19:57:12.307879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.307905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.175 [2024-07-24 19:57:12.307924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.307950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.175 [2024-07-24 19:57:12.307969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.307994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:34504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.175 [2024-07-24 19:57:12.308013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.308038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.175 [2024-07-24 19:57:12.308057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.308096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:34520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.175 [2024-07-24 19:57:12.308116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.308142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:34528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.175 [2024-07-24 19:57:12.308161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.308186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:34536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.175 [2024-07-24 19:57:12.308205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.308231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:34544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.175 [2024-07-24 19:57:12.308250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.308275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.175 [2024-07-24 19:57:12.308294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.308320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:35040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.175 [2024-07-24 19:57:12.308340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.308365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.175 [2024-07-24 19:57:12.308384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.308411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.175 [2024-07-24 19:57:12.308431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.308456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.175 [2024-07-24 19:57:12.308488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:18.175 [2024-07-24 19:57:12.308515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:35072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.176 [2024-07-24 19:57:12.308535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.308560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:35080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.176 [2024-07-24 19:57:12.308580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.308605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.176 [2024-07-24 19:57:12.308625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.308654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.176 [2024-07-24 19:57:12.308682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.308710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.176 [2024-07-24 19:57:12.308730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 
00:19:18.176 [2024-07-24 19:57:12.308770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.176 [2024-07-24 19:57:12.308791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.308816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.176 [2024-07-24 19:57:12.308836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.308862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.176 [2024-07-24 19:57:12.308881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.308906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:34560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.176 [2024-07-24 19:57:12.308929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.308955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:34568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.176 [2024-07-24 19:57:12.308974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.308999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:34576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.176 [2024-07-24 19:57:12.309018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.309043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.176 [2024-07-24 19:57:12.309061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.309087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:34592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.176 [2024-07-24 19:57:12.309106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.309131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:34600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.176 [2024-07-24 19:57:12.309151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.309177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:34608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.176 [2024-07-24 19:57:12.309197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.309223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:34616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.176 [2024-07-24 19:57:12.309250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.309277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.176 [2024-07-24 19:57:12.309296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.309323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:34632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.176 [2024-07-24 19:57:12.309342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.309368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.176 [2024-07-24 19:57:12.309387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.309412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:34648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.176 [2024-07-24 19:57:12.309431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.309457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:34656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.176 [2024-07-24 19:57:12.309476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.309509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:34664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.176 [2024-07-24 19:57:12.309528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.310347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:34672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.176 [2024-07-24 19:57:12.310378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.310421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:35128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.176 [2024-07-24 19:57:12.310441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.310475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.176 [2024-07-24 19:57:12.310495] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.310528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.176 [2024-07-24 19:57:12.310548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.310589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:35152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.176 [2024-07-24 19:57:12.310608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.310642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.176 [2024-07-24 19:57:12.310677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.310714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.176 [2024-07-24 19:57:12.310750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.310791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:35176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.176 [2024-07-24 19:57:12.310812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.310866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.176 [2024-07-24 19:57:12.310890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.310925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:35192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.176 [2024-07-24 19:57:12.310946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.310979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:35200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.176 [2024-07-24 19:57:12.310999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.311032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:35208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.176 [2024-07-24 19:57:12.311052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.311085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35216 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:18.176 [2024-07-24 19:57:12.311105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.311138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:35224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.176 [2024-07-24 19:57:12.311158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.311191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:35232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.176 [2024-07-24 19:57:12.311211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.311245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.176 [2024-07-24 19:57:12.311265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:18.176 [2024-07-24 19:57:12.311314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:35248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:12.311336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:12.311371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:12.311392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:12.311438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:35264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:12.311459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:12.311493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:12.311513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:12.311546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:35280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:12.311566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:12.311601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:12.311621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:12.311654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:35296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:12.311673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:12.311707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:12.311727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:12.311781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:35312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:12.311803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.685990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:25.686037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.686095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:25.686115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.686138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:25.686154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.686176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:25.686192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.686213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:74104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:25.686228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.686288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:74112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:25.686306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.686327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:25.686343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.686364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:74128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:25.686379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.686401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:73688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.177 [2024-07-24 19:57:25.686416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.686437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:73696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.177 [2024-07-24 19:57:25.686452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.686473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.177 [2024-07-24 19:57:25.686487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.686509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:73712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.177 [2024-07-24 19:57:25.686524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.686545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:73720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.177 [2024-07-24 19:57:25.686559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.686581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:73728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.177 [2024-07-24 19:57:25.686596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.686618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:73736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.177 [2024-07-24 19:57:25.686633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.686654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:73744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.177 [2024-07-24 19:57:25.686669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.686721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:73656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.177 [2024-07-24 19:57:25.686756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:18.177 [2024-07-24 19:57:25.686774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:73664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.177 [2024-07-24 19:57:25.686799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.686816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:73672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.177 [2024-07-24 19:57:25.686830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.686845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.177 [2024-07-24 19:57:25.686859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.686874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:25.686888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.686903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:25.686916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.686931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:25.686945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.686960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:25.686974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.686989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:25.687002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.687017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:25.687031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.687046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:25.687059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.687074] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:25.687087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.687102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:25.687116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.687131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.177 [2024-07-24 19:57:25.687144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.177 [2024-07-24 19:57:25.687166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.178 [2024-07-24 19:57:25.687181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.178 [2024-07-24 19:57:25.687210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.178 [2024-07-24 19:57:25.687238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.178 [2024-07-24 19:57:25.687267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.178 [2024-07-24 19:57:25.687295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.178 [2024-07-24 19:57:25.687323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.178 [2024-07-24 19:57:25.687352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:74 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.178 [2024-07-24 19:57:25.687380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.178 [2024-07-24 19:57:25.687409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.178 [2024-07-24 19:57:25.687439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.178 [2024-07-24 19:57:25.687468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.178 [2024-07-24 19:57:25.687496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.178 [2024-07-24 19:57:25.687524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.178 [2024-07-24 19:57:25.687560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:73752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.178 [2024-07-24 19:57:25.687589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:73760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.178 [2024-07-24 19:57:25.687617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:73768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.178 [2024-07-24 19:57:25.687648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:73776 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.178 [2024-07-24 19:57:25.687676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:73784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.178 [2024-07-24 19:57:25.687704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:73792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.178 [2024-07-24 19:57:25.687733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.178 [2024-07-24 19:57:25.687773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.178 [2024-07-24 19:57:25.687802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.178 [2024-07-24 19:57:25.687831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.178 [2024-07-24 19:57:25.687859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.178 [2024-07-24 19:57:25.687887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.178 [2024-07-24 19:57:25.687923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:73848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.178 [2024-07-24 19:57:25.687952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:73856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:18.178 [2024-07-24 19:57:25.687982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.687996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.178 [2024-07-24 19:57:25.688010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.688025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:73872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.178 [2024-07-24 19:57:25.688039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.688054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.178 [2024-07-24 19:57:25.688068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.688083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.178 [2024-07-24 19:57:25.688096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.688111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.178 [2024-07-24 19:57:25.688125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.688140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.178 [2024-07-24 19:57:25.688153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.688168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.178 [2024-07-24 19:57:25.688182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.688197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.178 [2024-07-24 19:57:25.688210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.688225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.178 [2024-07-24 19:57:25.688238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.688253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.178 [2024-07-24 19:57:25.688267] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.688288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.178 [2024-07-24 19:57:25.688302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.178 [2024-07-24 19:57:25.688317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.179 [2024-07-24 19:57:25.688330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.688345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.179 [2024-07-24 19:57:25.688358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.688373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.179 [2024-07-24 19:57:25.688387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.688402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.179 [2024-07-24 19:57:25.688416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.688431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.179 [2024-07-24 19:57:25.688444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.688459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.179 [2024-07-24 19:57:25.688472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.688488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.179 [2024-07-24 19:57:25.688501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.688517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:73944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.179 [2024-07-24 19:57:25.688548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.688567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:73952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.179 [2024-07-24 19:57:25.688581] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.688596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.179 [2024-07-24 19:57:25.688609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.688624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.179 [2024-07-24 19:57:25.688638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.688652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.179 [2024-07-24 19:57:25.688672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.688688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.179 [2024-07-24 19:57:25.688702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.688717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.179 [2024-07-24 19:57:25.688730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.688755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.179 [2024-07-24 19:57:25.688777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.688792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.179 [2024-07-24 19:57:25.688805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.688820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.179 [2024-07-24 19:57:25.688834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.688848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.179 [2024-07-24 19:57:25.688862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.688877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.179 [2024-07-24 19:57:25.688891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.688906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:74424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.179 [2024-07-24 19:57:25.688919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.688934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.179 [2024-07-24 19:57:25.688947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.688962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.179 [2024-07-24 19:57:25.688975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.688990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.179 [2024-07-24 19:57:25.689004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.689019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.179 [2024-07-24 19:57:25.689032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.689054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.179 [2024-07-24 19:57:25.689071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.689087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.179 [2024-07-24 19:57:25.689100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.689115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.179 [2024-07-24 19:57:25.689128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.689143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.179 [2024-07-24 19:57:25.689157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.689171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:74496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.179 [2024-07-24 19:57:25.689185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 
[2024-07-24 19:57:25.689199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.179 [2024-07-24 19:57:25.689213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.689228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:74512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:18.179 [2024-07-24 19:57:25.689241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.689256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.179 [2024-07-24 19:57:25.689269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.689285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.179 [2024-07-24 19:57:25.689298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.689313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.179 [2024-07-24 19:57:25.689326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.689341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.179 [2024-07-24 19:57:25.689354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.689369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.179 [2024-07-24 19:57:25.689383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.689398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.179 [2024-07-24 19:57:25.689411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.689432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:74056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.179 [2024-07-24 19:57:25.689446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.179 [2024-07-24 19:57:25.689461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fe6a0 is same with the state(5) to be set 00:19:18.179 [2024-07-24 19:57:25.689477] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.179 [2024-07-24 19:57:25.689492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:19:18.180 [2024-07-24 19:57:25.689503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74064 len:8 PRP1 0x0 PRP2 0x0 00:19:18.180 [2024-07-24 19:57:25.689517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.180 [2024-07-24 19:57:25.689531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.180 [2024-07-24 19:57:25.689541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.180 [2024-07-24 19:57:25.689551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74520 len:8 PRP1 0x0 PRP2 0x0 00:19:18.180 [2024-07-24 19:57:25.689564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.180 [2024-07-24 19:57:25.689577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.180 [2024-07-24 19:57:25.689587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.180 [2024-07-24 19:57:25.689597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74528 len:8 PRP1 0x0 PRP2 0x0 00:19:18.180 [2024-07-24 19:57:25.689610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.180 [2024-07-24 19:57:25.689623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.180 [2024-07-24 19:57:25.689632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.180 [2024-07-24 19:57:25.689642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74536 len:8 PRP1 0x0 PRP2 0x0 00:19:18.180 [2024-07-24 19:57:25.689656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.180 [2024-07-24 19:57:25.689668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.180 [2024-07-24 19:57:25.689678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.180 [2024-07-24 19:57:25.689688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74544 len:8 PRP1 0x0 PRP2 0x0 00:19:18.180 [2024-07-24 19:57:25.689701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.180 [2024-07-24 19:57:25.689714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.180 [2024-07-24 19:57:25.689723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.180 [2024-07-24 19:57:25.689733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74552 len:8 PRP1 0x0 PRP2 0x0 00:19:18.180 [2024-07-24 19:57:25.689759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.180 [2024-07-24 19:57:25.689773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.180 [2024-07-24 19:57:25.689783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.180 [2024-07-24 
19:57:25.689794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74560 len:8 PRP1 0x0 PRP2 0x0 00:19:18.180 [2024-07-24 19:57:25.689814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.180 [2024-07-24 19:57:25.689828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.180 [2024-07-24 19:57:25.689838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.180 [2024-07-24 19:57:25.689848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74568 len:8 PRP1 0x0 PRP2 0x0 00:19:18.180 [2024-07-24 19:57:25.689861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.180 [2024-07-24 19:57:25.689874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.180 [2024-07-24 19:57:25.689884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.180 [2024-07-24 19:57:25.689894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74576 len:8 PRP1 0x0 PRP2 0x0 00:19:18.180 [2024-07-24 19:57:25.689907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.180 [2024-07-24 19:57:25.689920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.180 [2024-07-24 19:57:25.689929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.180 [2024-07-24 19:57:25.689939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74584 len:8 PRP1 0x0 PRP2 0x0 00:19:18.180 [2024-07-24 19:57:25.689952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.180 [2024-07-24 19:57:25.689965] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.180 [2024-07-24 19:57:25.689974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.180 [2024-07-24 19:57:25.689984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74592 len:8 PRP1 0x0 PRP2 0x0 00:19:18.180 [2024-07-24 19:57:25.689997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.180 [2024-07-24 19:57:25.690010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.180 [2024-07-24 19:57:25.690019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.180 [2024-07-24 19:57:25.690029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74600 len:8 PRP1 0x0 PRP2 0x0 00:19:18.180 [2024-07-24 19:57:25.690043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.180 [2024-07-24 19:57:25.690056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.180 [2024-07-24 19:57:25.690065] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.180 [2024-07-24 19:57:25.690075] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74608 len:8 PRP1 0x0 PRP2 0x0 00:19:18.180 [2024-07-24 19:57:25.690088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.180 [2024-07-24 19:57:25.690101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.180 [2024-07-24 19:57:25.690111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.180 [2024-07-24 19:57:25.690121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74616 len:8 PRP1 0x0 PRP2 0x0 00:19:18.180 [2024-07-24 19:57:25.690133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.180 [2024-07-24 19:57:25.690146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.180 [2024-07-24 19:57:25.690156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.180 [2024-07-24 19:57:25.690174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74624 len:8 PRP1 0x0 PRP2 0x0 00:19:18.180 [2024-07-24 19:57:25.690188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.180 [2024-07-24 19:57:25.690201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.180 [2024-07-24 19:57:25.690211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.180 [2024-07-24 19:57:25.690220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74632 len:8 PRP1 0x0 PRP2 0x0 00:19:18.180 [2024-07-24 19:57:25.690233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.180 [2024-07-24 19:57:25.690246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.180 [2024-07-24 19:57:25.690256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.180 [2024-07-24 19:57:25.690266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74640 len:8 PRP1 0x0 PRP2 0x0 00:19:18.180 [2024-07-24 19:57:25.690279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.180 [2024-07-24 19:57:25.690292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.180 [2024-07-24 19:57:25.690302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.180 [2024-07-24 19:57:25.690312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74648 len:8 PRP1 0x0 PRP2 0x0 00:19:18.180 [2024-07-24 19:57:25.690330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.180 [2024-07-24 19:57:25.690343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.180 [2024-07-24 19:57:25.690353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.180 [2024-07-24 19:57:25.690363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:74656 len:8 PRP1 0x0 PRP2 0x0 00:19:18.180 [2024-07-24 19:57:25.690376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.180 [2024-07-24 19:57:25.690389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.180 [2024-07-24 19:57:25.690399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.180 [2024-07-24 19:57:25.690409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74664 len:8 PRP1 0x0 PRP2 0x0 00:19:18.180 [2024-07-24 19:57:25.690422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.180 [2024-07-24 19:57:25.690435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:18.180 [2024-07-24 19:57:25.690444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:18.181 [2024-07-24 19:57:25.690454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74672 len:8 PRP1 0x0 PRP2 0x0 00:19:18.181 [2024-07-24 19:57:25.690467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-07-24 19:57:25.690523] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10fe6a0 was disconnected and freed. reset controller. 00:19:18.181 [2024-07-24 19:57:25.691688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:18.181 [2024-07-24 19:57:25.691780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:18.181 [2024-07-24 19:57:25.691803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:18.181 [2024-07-24 19:57:25.691847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1080100 (9): Bad file descriptor 00:19:18.181 [2024-07-24 19:57:25.692324] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:18.181 [2024-07-24 19:57:25.692357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1080100 with addr=10.0.0.2, port=4421 00:19:18.181 [2024-07-24 19:57:25.692380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1080100 is same with the state(5) to be set 00:19:18.181 [2024-07-24 19:57:25.692455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1080100 (9): Bad file descriptor 00:19:18.181 [2024-07-24 19:57:25.692503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:18.181 [2024-07-24 19:57:25.692521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:18.181 [2024-07-24 19:57:25.692556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:18.181 [2024-07-24 19:57:25.692589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
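A reader aid for the failover sequence above (not part of the test output): the reconnect attempt logs "uring_sock_create: *ERROR*: connect() failed, errno = 111" while dialing 10.0.0.2 port 4421, after which the controller is marked failed and the reset is retried. On Linux, errno 111 is ECONNREFUSED, which suggests nothing was accepting connections on that port at that instant; the retry later succeeds once the reset completes. A minimal, hypothetical one-liner (not part of the test scripts) to decode the errno:

    # Decode the errno printed by uring_sock_create above (Linux: 111 == ECONNREFUSED).
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # Prints: ECONNREFUSED - Connection refused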
00:19:18.181 [2024-07-24 19:57:25.692605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:18.181 [2024-07-24 19:57:35.762835] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:18.181 Received shutdown signal, test time was about 55.257497 seconds 00:19:18.181 00:19:18.181 Latency(us) 00:19:18.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.181 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:18.181 Verification LBA range: start 0x0 length 0x4000 00:19:18.181 Nvme0n1 : 55.26 7436.19 29.05 0.00 0.00 17181.29 1079.85 7046430.72 00:19:18.181 =================================================================================================================== 00:19:18.181 Total : 7436.19 29.05 0.00 0.00 17181.29 1079.85 7046430.72 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:18.181 rmmod nvme_tcp 00:19:18.181 rmmod nvme_fabrics 00:19:18.181 rmmod nvme_keyring 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 80117 ']' 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 80117 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 80117 ']' 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 80117 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80117 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:18.181 killing process with pid 
80117 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80117' 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 80117 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 80117 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:18.181 00:19:18.181 real 1m1.338s 00:19:18.181 user 2m49.282s 00:19:18.181 sys 0m18.882s 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:18.181 ************************************ 00:19:18.181 END TEST nvmf_host_multipath 00:19:18.181 ************************************ 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.181 ************************************ 00:19:18.181 START TEST nvmf_timeout 00:19:18.181 ************************************ 00:19:18.181 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:18.440 * Looking for test storage... 
00:19:18.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:18.440 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:18.441 Cannot find device "nvmf_tgt_br" 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:18.441 Cannot find device "nvmf_tgt_br2" 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # true 
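The nvmftestinit/nvmf_veth_init sequence traced below builds the virtual test network from the names declared above: a network namespace nvmf_tgt_ns_spdk holding the target interfaces nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24), an initiator-side nvmf_init_if (10.0.0.1/24), and a bridge nvmf_br that ties the veth peer ends (nvmf_init_br, nvmf_tgt_br, nvmf_tgt_br2) together, plus an iptables rule accepting TCP 4420 on the initiator interface. A few read-only inspection commands, offered only as a reader aid and not part of the test scripts:

    # Show the target-side namespace and the addresses configured inside it.
    ip netns list
    ip netns exec nvmf_tgt_ns_spdk ip -brief addr show
    # Show the initiator-side interface and which ports are enslaved to the bridge.
    ip -brief addr show dev nvmf_init_if
    bridge link show | grep nvmf_br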
00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:18.441 Cannot find device "nvmf_tgt_br" 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:18.441 Cannot find device "nvmf_tgt_br2" 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:19:18.441 19:57:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:18.441 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:18.441 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:18.441 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:18.441 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:19:18.441 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:18.441 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:18.441 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:19:18.441 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:18.441 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:18.441 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:18.441 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:18.441 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:18.441 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:18.700 19:57:47 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:18.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:18.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:19:18.700 00:19:18.700 --- 10.0.0.2 ping statistics --- 00:19:18.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.700 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:18.700 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:18.700 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:19:18.700 00:19:18.700 --- 10.0.0.3 ping statistics --- 00:19:18.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.700 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:18.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:18.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:19:18.700 00:19:18.700 --- 10.0.0.1 ping statistics --- 00:19:18.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.700 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=81279 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 81279 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81279 ']' 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:18.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:18.700 19:57:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:18.700 [2024-07-24 19:57:47.347452] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:19:18.700 [2024-07-24 19:57:47.347572] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.002 [2024-07-24 19:57:47.485372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:19.002 [2024-07-24 19:57:47.597071] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:19.002 [2024-07-24 19:57:47.597127] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:19.002 [2024-07-24 19:57:47.597139] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:19.002 [2024-07-24 19:57:47.597148] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:19.002 [2024-07-24 19:57:47.597155] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:19.002 [2024-07-24 19:57:47.597307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.002 [2024-07-24 19:57:47.597317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.300 [2024-07-24 19:57:47.648988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:19.866 19:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:19.866 19:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:19:19.866 19:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:19.866 19:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:19.866 19:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:19.866 19:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.866 19:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:19.866 19:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:20.123 [2024-07-24 19:57:48.690952] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.123 19:57:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:20.380 Malloc0 00:19:20.638 19:57:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:20.896 19:57:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:20.896 19:57:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:21.465 [2024-07-24 19:57:49.834834] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
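Stripped of the xtrace prefixes, the target-side bring-up traced above reduces to five rpc.py calls against the nvmf_tgt started with -m 0x3. The commands below are the same ones the trace shows, collected here for readability; only the rpc_py shell variable is added for brevity:

    # Target-side bring-up, as traced above.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420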
00:19:21.465 19:57:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81335 00:19:21.465 19:57:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:21.465 19:57:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81335 /var/tmp/bdevperf.sock 00:19:21.465 19:57:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81335 ']' 00:19:21.465 19:57:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:21.465 19:57:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:21.465 19:57:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:21.465 19:57:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:21.465 19:57:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:21.465 [2024-07-24 19:57:49.933488] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:19:21.465 [2024-07-24 19:57:49.933973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81335 ] 00:19:21.465 [2024-07-24 19:57:50.084592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.723 [2024-07-24 19:57:50.213652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.723 [2024-07-24 19:57:50.270139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:22.290 19:57:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:22.290 19:57:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:19:22.290 19:57:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:22.548 19:57:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:22.874 NVMe0n1 00:19:22.874 19:57:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81353 00:19:22.874 19:57:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:22.874 19:57:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:23.132 Running I/O for 10 seconds... 
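For reference, the host-side setup traced above amounts to launching bdevperf in wait mode and attaching the NVMe-oF controller with the reconnect/timeout knobs this test appears to exercise (--ctrlr-loss-timeout-sec 5, --reconnect-delay-sec 2). This is a simplified consolidation of the commands shown in the trace; backgrounding and the waitforlisten handshake on /var/tmp/bdevperf.sock are elided:

    # Host-side setup, as traced above.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $rpc_py bdev_nvme_set_options -r -1
    $rpc_py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests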
00:19:24.067 19:57:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:24.328 [2024-07-24 19:57:52.769662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.769732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.769772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.769784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.769796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.769806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.769818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.769828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.769840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.769849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.769861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.769880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.769892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.769901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.769913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.769923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.769935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.769944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.769956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 
[2024-07-24 19:57:52.769966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.769977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.769986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.769998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.770007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.770019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.770028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.770039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.770049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.770061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.770070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.770081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.770092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.770103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.770113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.770127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.770137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.770149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.770158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.770169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.770178] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.770190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.770199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.770209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.770218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.770229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.770238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.770249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.770258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.770269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.770278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.770606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.770631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.770644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.770654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.770666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.770784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.770802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.770812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.770824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.770834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.328 [2024-07-24 19:57:52.770846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.328 [2024-07-24 19:57:52.770856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.770867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.770877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.770889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.770899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.770911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.770920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.770931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.770948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.770959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.770968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.770980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.770989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.771010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.771032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.771053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.771074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.771094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.771115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.771134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.771155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.771175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.771196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.771216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.771237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.771258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771270] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.771281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.771302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.771323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.771344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.771364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.771385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.771405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.771426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.771446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.771467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771478] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.771487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.771508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.329 [2024-07-24 19:57:52.771528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.329 [2024-07-24 19:57:52.771550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.329 [2024-07-24 19:57:52.771571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.329 [2024-07-24 19:57:52.771592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.329 [2024-07-24 19:57:52.771615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.329 [2024-07-24 19:57:52.771636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.329 [2024-07-24 19:57:52.771656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.329 [2024-07-24 19:57:52.771677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.329 [2024-07-24 19:57:52.771688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65760 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.771697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.771709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.771718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.771730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.771750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.771763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.771773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.771784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.771794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.771805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.771816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.771828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.771838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.771849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.771859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.771871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.330 [2024-07-24 19:57:52.771881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.771892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.330 [2024-07-24 19:57:52.771902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.771929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:24.330 [2024-07-24 19:57:52.771940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.771952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.771961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.771973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.771983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.771994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.772004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.772015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.772025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.772036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.772045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.772056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.772066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.772077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:24.330 [2024-07-24 19:57:52.772086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.772102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.772112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.772123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.772133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.772144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 
19:57:52.772154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.772165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.772174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.772185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.772195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.772206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.772215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.772226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.772236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.772247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.772256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.772267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.772278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.772289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.772298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.772310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.772319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.773073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.773098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.773112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.773122] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.773134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.773143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.773155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.773164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.773176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.773186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.773197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.773207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.773218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.773228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.773240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.773249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.773261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.773270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.773281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.330 [2024-07-24 19:57:52.773290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.330 [2024-07-24 19:57:52.773302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.331 [2024-07-24 19:57:52.773311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.331 [2024-07-24 19:57:52.773322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.331 [2024-07-24 19:57:52.773332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.331 [2024-07-24 19:57:52.773344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.331 [2024-07-24 19:57:52.773353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.331 [2024-07-24 19:57:52.773366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.331 [2024-07-24 19:57:52.773376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.331 [2024-07-24 19:57:52.773387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.331 [2024-07-24 19:57:52.773397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.331 [2024-07-24 19:57:52.773408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.331 [2024-07-24 19:57:52.773418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.331 [2024-07-24 19:57:52.773437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.331 [2024-07-24 19:57:52.773447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.331 [2024-07-24 19:57:52.773458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.331 [2024-07-24 19:57:52.773467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.331 [2024-07-24 19:57:52.773479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.331 [2024-07-24 19:57:52.773488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.331 [2024-07-24 19:57:52.773500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.331 [2024-07-24 19:57:52.773509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.331 [2024-07-24 19:57:52.773520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.331 [2024-07-24 19:57:52.773538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.331 [2024-07-24 19:57:52.773550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.331 [2024-07-24 19:57:52.773559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.331 [2024-07-24 19:57:52.773572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.331 [2024-07-24 19:57:52.773581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.331 [2024-07-24 19:57:52.773592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:66152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.331 [2024-07-24 19:57:52.773601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.331 [2024-07-24 19:57:52.773613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:66160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.331 [2024-07-24 19:57:52.773622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.331 [2024-07-24 19:57:52.773634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.331 [2024-07-24 19:57:52.773643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.331 [2024-07-24 19:57:52.773655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.331 [2024-07-24 19:57:52.773664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.331 [2024-07-24 19:57:52.773676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.331 [2024-07-24 19:57:52.773685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.331 [2024-07-24 19:57:52.773695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17221b0 is same with the state(5) to be set 00:19:24.331 [2024-07-24 19:57:52.773709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:24.331 [2024-07-24 19:57:52.773717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:24.331 [2024-07-24 19:57:52.773726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66192 len:8 PRP1 0x0 PRP2 0x0 00:19:24.331 [2024-07-24 19:57:52.773735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:24.331 [2024-07-24 19:57:52.773802] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17221b0 was disconnected and freed. reset controller. 
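The wall of WRITE/READ completions above is the expected fallout of the fault injected at host/timeout.sh@55 at the start of this block: once the target's listener is pulled, every in-flight verify I/O on qpair 0x17221b0 is completed as ABORTED - SQ DELETION and the bdev_nvme layer schedules a controller reset. The injection itself is a single target-side RPC (subsystem and address as in the log):

    # Removing the listener out from under the connected initiator aborts the
    # in-flight I/O seen above and triggers the reconnect handling that follows.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420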
00:19:24.331 [2024-07-24 19:57:52.774060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:24.331 [2024-07-24 19:57:52.774143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b1d40 (9): Bad file descriptor 00:19:24.331 [2024-07-24 19:57:52.774258] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:24.331 [2024-07-24 19:57:52.774280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16b1d40 with addr=10.0.0.2, port=4420 00:19:24.331 [2024-07-24 19:57:52.774291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1d40 is same with the state(5) to be set 00:19:24.331 [2024-07-24 19:57:52.774309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b1d40 (9): Bad file descriptor 00:19:24.331 [2024-07-24 19:57:52.774326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:24.331 [2024-07-24 19:57:52.774336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:24.331 [2024-07-24 19:57:52.774346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:24.331 [2024-07-24 19:57:52.774366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:24.331 [2024-07-24 19:57:52.774378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:24.331 19:57:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:26.233 [2024-07-24 19:57:54.774658] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:26.233 [2024-07-24 19:57:54.775157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16b1d40 with addr=10.0.0.2, port=4420 00:19:26.233 [2024-07-24 19:57:54.775556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1d40 is same with the state(5) to be set 00:19:26.233 [2024-07-24 19:57:54.775963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b1d40 (9): Bad file descriptor 00:19:26.233 [2024-07-24 19:57:54.776376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:26.233 [2024-07-24 19:57:54.776775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:26.233 [2024-07-24 19:57:54.776993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:26.233 [2024-07-24 19:57:54.777031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:26.233 [2024-07-24 19:57:54.777045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:26.233 19:57:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:26.233 19:57:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:26.233 19:57:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:26.491 19:57:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:26.491 19:57:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:26.491 19:57:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:26.491 19:57:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:26.749 19:57:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:26.749 19:57:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:28.124 [2024-07-24 19:57:56.777230] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:28.124 [2024-07-24 19:57:56.777308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16b1d40 with addr=10.0.0.2, port=4420 00:19:28.124 [2024-07-24 19:57:56.777326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1d40 is same with the state(5) to be set 00:19:28.124 [2024-07-24 19:57:56.777366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b1d40 (9): Bad file descriptor 00:19:28.124 [2024-07-24 19:57:56.777386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:28.124 [2024-07-24 19:57:56.777397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:28.124 [2024-07-24 19:57:56.777408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:28.124 [2024-07-24 19:57:56.777436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:28.124 [2024-07-24 19:57:56.777450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:30.667 [2024-07-24 19:57:58.777506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:30.667 [2024-07-24 19:57:58.777592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:30.667 [2024-07-24 19:57:58.777605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:30.667 [2024-07-24 19:57:58.777616] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:30.667 [2024-07-24 19:57:58.777645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
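Read against the attach options, the cadence above is what --reconnect-delay-sec 2 and --ctrlr-loss-timeout-sec 5 should produce: reconnect attempts at roughly 19:57:52, :54 and :56 (2 s apart, each refused with errno 111 since the listener is gone), and once the 5 s loss timeout has elapsed the 19:57:58 attempt only finds the controller already in the failed state. While reconnects are still pending, the test confirms that the controller and its bdev remain registered; the same checks can be repeated by hand (paths and socket as in the log, jq as used by the test):

    # During the reconnect window both objects are still reported ...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_get_controllers | jq -r '.[].name'   # expected: NVMe0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_get_bdevs | jq -r '.[].name'              # expected: NVMe0n1
    # ... whereas the same two queries return nothing once ctrlr-loss-timeout-sec
    # has expired (the '' == '' checks at host/timeout.sh@62 and @63 further on).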
00:19:31.234 00:19:31.234 Latency(us) 00:19:31.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.234 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:31.234 Verification LBA range: start 0x0 length 0x4000 00:19:31.234 NVMe0n1 : 8.19 1003.10 3.92 15.63 0.00 125440.64 4021.53 7015926.69 00:19:31.234 =================================================================================================================== 00:19:31.234 Total : 1003.10 3.92 15.63 0.00 125440.64 4021.53 7015926.69 00:19:31.234 0 00:19:31.801 19:58:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:19:31.801 19:58:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:31.801 19:58:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:32.059 19:58:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:19:32.059 19:58:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:19:32.059 19:58:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:32.059 19:58:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:32.319 19:58:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:19:32.319 19:58:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 81353 00:19:32.319 19:58:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81335 00:19:32.319 19:58:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81335 ']' 00:19:32.319 19:58:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81335 00:19:32.319 19:58:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:19:32.319 19:58:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:32.319 19:58:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81335 00:19:32.319 killing process with pid 81335 00:19:32.319 Received shutdown signal, test time was about 9.309666 seconds 00:19:32.319 00:19:32.319 Latency(us) 00:19:32.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.319 =================================================================================================================== 00:19:32.319 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:32.319 19:58:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:32.319 19:58:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:32.319 19:58:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81335' 00:19:32.319 19:58:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81335 00:19:32.319 19:58:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81335 00:19:32.577 19:58:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:32.834 [2024-07-24 19:58:01.339045] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:19:32.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.834 19:58:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=81475 00:19:32.834 19:58:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:32.834 19:58:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 81475 /var/tmp/bdevperf.sock 00:19:32.834 19:58:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81475 ']' 00:19:32.834 19:58:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.834 19:58:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:32.834 19:58:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:32.834 19:58:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:32.834 19:58:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:32.834 [2024-07-24 19:58:01.414522] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:19:32.834 [2024-07-24 19:58:01.414901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81475 ] 00:19:33.091 [2024-07-24 19:58:01.550163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.091 [2024-07-24 19:58:01.657774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.091 [2024-07-24 19:58:01.709889] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:34.026 19:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:34.026 19:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:19:34.026 19:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:34.026 19:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:34.284 NVMe0n1 00:19:34.543 19:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=81493 00:19:34.543 19:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:34.543 19:58:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:34.543 Running I/O for 10 seconds... 
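The second pass re-adds the listener, restarts bdevperf with the same workload flags, and changes only the attach call: --reconnect-delay-sec drops to 1 and --fast-io-fail-timeout-sec 2 is added, the intent being that queued I/O is failed back to bdevperf after about 2 s of disconnection instead of waiting out the full 5 s controller-loss timeout. The changed call, with the same target address as before, is essentially:

    # Same target, tighter timeouts: fail pending I/O after ~2 s while still
    # allowing up to 5 s of 1-second-spaced reconnect attempts.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 \
        --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1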
00:19:35.479 19:58:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:35.740 [2024-07-24 19:58:04.194508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:67456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.740 [2024-07-24 19:58:04.195202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.740 [2024-07-24 19:58:04.195601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:67464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.740 [2024-07-24 19:58:04.195990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.740 [2024-07-24 19:58:04.196359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:67472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.740 [2024-07-24 19:58:04.196751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.740 [2024-07-24 19:58:04.197126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:67480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.740 [2024-07-24 19:58:04.197331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.740 [2024-07-24 19:58:04.197353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:67488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.740 [2024-07-24 19:58:04.197364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.740 [2024-07-24 19:58:04.197376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:67496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.740 [2024-07-24 19:58:04.197385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.740 [2024-07-24 19:58:04.197396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:67504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.740 [2024-07-24 19:58:04.197406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.740 [2024-07-24 19:58:04.197418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:67512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.740 [2024-07-24 19:58:04.197427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.740 [2024-07-24 19:58:04.197439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:67520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.740 [2024-07-24 19:58:04.197448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.740 [2024-07-24 19:58:04.197459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:67528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:35.740 [2024-07-24 19:58:04.197469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.740 [2024-07-24 19:58:04.197481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:67536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.740 [2024-07-24 19:58:04.197490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.740 [2024-07-24 19:58:04.197502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:67544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.740 [2024-07-24 19:58:04.197511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.740 [2024-07-24 19:58:04.197523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:67552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.740 [2024-07-24 19:58:04.197533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.740 [2024-07-24 19:58:04.197544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:67560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.740 [2024-07-24 19:58:04.197554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.740 [2024-07-24 19:58:04.197565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:67568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.740 [2024-07-24 19:58:04.197574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.740 [2024-07-24 19:58:04.197586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:67576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.740 [2024-07-24 19:58:04.197595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.740 [2024-07-24 19:58:04.197607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:67072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.740 [2024-07-24 19:58:04.197620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.740 [2024-07-24 19:58:04.197632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:67080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.740 [2024-07-24 19:58:04.197642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.740 [2024-07-24 19:58:04.197662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:67088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.740 [2024-07-24 19:58:04.197671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.740 [2024-07-24 19:58:04.197683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:67096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.740 [2024-07-24 19:58:04.197692] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.740 [2024-07-24 19:58:04.197704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.740 [2024-07-24 19:58:04.197713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.740 [2024-07-24 19:58:04.197725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:67112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.740 [2024-07-24 19:58:04.197734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.740 [2024-07-24 19:58:04.197759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:67120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.740 [2024-07-24 19:58:04.197770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.197781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:67128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.741 [2024-07-24 19:58:04.197791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.197802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:67584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.197811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.197823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:67592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.197832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.197844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:67600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.197853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.197864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:67608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.197874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.197887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:67616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.197897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.197909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:67624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.197919] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.197930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:67632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.197940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.197951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:67640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.197960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.197972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:67648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.197981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.197993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:67656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.198003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.198023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:67672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.198044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:67680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.198065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:67688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.198086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:67696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.198106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:67704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.198127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:67136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.741 [2024-07-24 19:58:04.198147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:67144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.741 [2024-07-24 19:58:04.198169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.741 [2024-07-24 19:58:04.198189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:67160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.741 [2024-07-24 19:58:04.198210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.741 [2024-07-24 19:58:04.198232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:67176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.741 [2024-07-24 19:58:04.198253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:67184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.741 [2024-07-24 19:58:04.198273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:67192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.741 [2024-07-24 19:58:04.198294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:67712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.198316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:67720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.198337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 
[2024-07-24 19:58:04.198348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:67728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.198357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:67736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.198378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:67744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.198399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:67752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.198420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:67760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.198441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:67768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.198461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:67776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.198481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:67784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.198502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:67792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.198523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:67800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.198543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:67808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.198565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:67816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.741 [2024-07-24 19:58:04.198585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.741 [2024-07-24 19:58:04.198596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:67824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.742 [2024-07-24 19:58:04.198606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.198617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.742 [2024-07-24 19:58:04.198626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.198637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:67840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.742 [2024-07-24 19:58:04.198648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.198659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:67848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.742 [2024-07-24 19:58:04.198669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.198680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:67856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.742 [2024-07-24 19:58:04.198690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.198702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:67864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.742 [2024-07-24 19:58:04.198711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.198723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:67200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.742 [2024-07-24 19:58:04.198732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.198757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:67208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.742 [2024-07-24 19:58:04.198767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.198779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:38 nsid:1 lba:67216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.742 [2024-07-24 19:58:04.198788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.198800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:67224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.742 [2024-07-24 19:58:04.198810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.198821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:67232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.742 [2024-07-24 19:58:04.198830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.198842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:67240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.742 [2024-07-24 19:58:04.198851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.198863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:67248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.742 [2024-07-24 19:58:04.198872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.198883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:67256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.742 [2024-07-24 19:58:04.198893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.198905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:67264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.742 [2024-07-24 19:58:04.198915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.198927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:67272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.742 [2024-07-24 19:58:04.198937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.198948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.742 [2024-07-24 19:58:04.198958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.198970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.742 [2024-07-24 19:58:04.198981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.198993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:67296 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.742 [2024-07-24 19:58:04.199011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.199023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:67304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.742 [2024-07-24 19:58:04.199033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.199045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:67312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.742 [2024-07-24 19:58:04.199055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.199067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:67320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.742 [2024-07-24 19:58:04.199077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.199088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:67872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.742 [2024-07-24 19:58:04.199097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.199108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.742 [2024-07-24 19:58:04.199118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.199129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:67888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.742 [2024-07-24 19:58:04.199139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.199150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:67896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.742 [2024-07-24 19:58:04.199159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.199171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:67904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.742 [2024-07-24 19:58:04.199180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.199192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:67912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.742 [2024-07-24 19:58:04.199201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.199212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:67920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.742 
[2024-07-24 19:58:04.199221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.199233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.742 [2024-07-24 19:58:04.199252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.199264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.742 [2024-07-24 19:58:04.199274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.199285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.742 [2024-07-24 19:58:04.199294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.199306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.742 [2024-07-24 19:58:04.199315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.199326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.742 [2024-07-24 19:58:04.199336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.199347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:67968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.742 [2024-07-24 19:58:04.199361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.199373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.742 [2024-07-24 19:58:04.199382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.199394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:67328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.742 [2024-07-24 19:58:04.199403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.199415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:67336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.742 [2024-07-24 19:58:04.199428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.199439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.742 [2024-07-24 19:58:04.199449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.742 [2024-07-24 19:58:04.199460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:67352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.743 [2024-07-24 19:58:04.199469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.743 [2024-07-24 19:58:04.199480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:67360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.743 [2024-07-24 19:58:04.199489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.743 [2024-07-24 19:58:04.199508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:67368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.743 [2024-07-24 19:58:04.199518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.743 [2024-07-24 19:58:04.199530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:67376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.743 [2024-07-24 19:58:04.199539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.743 [2024-07-24 19:58:04.199550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf51b0 is same with the state(5) to be set 00:19:35.743 [2024-07-24 19:58:04.199564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.743 [2024-07-24 19:58:04.199572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.743 [2024-07-24 19:58:04.199581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67384 len:8 PRP1 0x0 PRP2 0x0 00:19:35.743 [2024-07-24 19:58:04.199590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.743 [2024-07-24 19:58:04.199601] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.743 [2024-07-24 19:58:04.199609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.743 [2024-07-24 19:58:04.199617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67984 len:8 PRP1 0x0 PRP2 0x0 00:19:35.743 [2024-07-24 19:58:04.199626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.743 [2024-07-24 19:58:04.199635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.743 [2024-07-24 19:58:04.199642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.743 [2024-07-24 19:58:04.199651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67992 len:8 PRP1 0x0 PRP2 0x0 00:19:35.743 [2024-07-24 19:58:04.199668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.743 [2024-07-24 19:58:04.199677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:19:35.743 [2024-07-24 19:58:04.199684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.743 [2024-07-24 19:58:04.199698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68000 len:8 PRP1 0x0 PRP2 0x0 00:19:35.743 [2024-07-24 19:58:04.199707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.743 [2024-07-24 19:58:04.200525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.743 [2024-07-24 19:58:04.200547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.743 [2024-07-24 19:58:04.200558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68008 len:8 PRP1 0x0 PRP2 0x0 00:19:35.743 [2024-07-24 19:58:04.200567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.743 [2024-07-24 19:58:04.200578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.743 [2024-07-24 19:58:04.200585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.743 [2024-07-24 19:58:04.200593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68016 len:8 PRP1 0x0 PRP2 0x0 00:19:35.743 [2024-07-24 19:58:04.200603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.743 [2024-07-24 19:58:04.200613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.743 [2024-07-24 19:58:04.200621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.743 [2024-07-24 19:58:04.200628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68024 len:8 PRP1 0x0 PRP2 0x0 00:19:35.743 [2024-07-24 19:58:04.200638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.743 [2024-07-24 19:58:04.200648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.743 [2024-07-24 19:58:04.200655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.743 [2024-07-24 19:58:04.200662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68032 len:8 PRP1 0x0 PRP2 0x0 00:19:35.743 [2024-07-24 19:58:04.200671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.743 [2024-07-24 19:58:04.200686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.743 [2024-07-24 19:58:04.200693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.743 [2024-07-24 19:58:04.200701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68040 len:8 PRP1 0x0 PRP2 0x0 00:19:35.743 [2024-07-24 19:58:04.200720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.743 [2024-07-24 19:58:04.200750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.743 [2024-07-24 19:58:04.200761] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.743 [2024-07-24 19:58:04.200769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68048 len:8 PRP1 0x0 PRP2 0x0 00:19:35.743 [2024-07-24 19:58:04.200779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.743 [2024-07-24 19:58:04.200789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.743 [2024-07-24 19:58:04.200796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.743 [2024-07-24 19:58:04.200804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68056 len:8 PRP1 0x0 PRP2 0x0 00:19:35.743 [2024-07-24 19:58:04.200813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.743 [2024-07-24 19:58:04.200829] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.743 [2024-07-24 19:58:04.200836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.743 [2024-07-24 19:58:04.200849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68064 len:8 PRP1 0x0 PRP2 0x0 00:19:35.743 [2024-07-24 19:58:04.200859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.743 [2024-07-24 19:58:04.200870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.743 [2024-07-24 19:58:04.200878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.743 [2024-07-24 19:58:04.200887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68072 len:8 PRP1 0x0 PRP2 0x0 00:19:35.743 [2024-07-24 19:58:04.200901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.743 [2024-07-24 19:58:04.200910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.743 [2024-07-24 19:58:04.200918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.743 [2024-07-24 19:58:04.200926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68080 len:8 PRP1 0x0 PRP2 0x0 00:19:35.743 [2024-07-24 19:58:04.200935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.743 [2024-07-24 19:58:04.200945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.743 [2024-07-24 19:58:04.200952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.743 [2024-07-24 19:58:04.200960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68088 len:8 PRP1 0x0 PRP2 0x0 00:19:35.743 [2024-07-24 19:58:04.200969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.743 [2024-07-24 19:58:04.200979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.743 [2024-07-24 19:58:04.200986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:19:35.743 [2024-07-24 19:58:04.200994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67392 len:8 PRP1 0x0 PRP2 0x0 00:19:35.743 [2024-07-24 19:58:04.201003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.743 [2024-07-24 19:58:04.201013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.743 [2024-07-24 19:58:04.201020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.743 [2024-07-24 19:58:04.201028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67400 len:8 PRP1 0x0 PRP2 0x0 00:19:35.743 [2024-07-24 19:58:04.201038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.743 [2024-07-24 19:58:04.201047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.743 [2024-07-24 19:58:04.201054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.743 [2024-07-24 19:58:04.201062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67408 len:8 PRP1 0x0 PRP2 0x0 00:19:35.743 [2024-07-24 19:58:04.201070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.743 [2024-07-24 19:58:04.201342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.743 [2024-07-24 19:58:04.201361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.743 [2024-07-24 19:58:04.201438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67416 len:8 PRP1 0x0 PRP2 0x0 00:19:35.743 [2024-07-24 19:58:04.201454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.743 [2024-07-24 19:58:04.201464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.743 [2024-07-24 19:58:04.201472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.743 [2024-07-24 19:58:04.201487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67424 len:8 PRP1 0x0 PRP2 0x0 00:19:35.743 [2024-07-24 19:58:04.201496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.743 [2024-07-24 19:58:04.201506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.743 [2024-07-24 19:58:04.201514] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.743 [2024-07-24 19:58:04.201523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67432 len:8 PRP1 0x0 PRP2 0x0 00:19:35.743 [2024-07-24 19:58:04.201532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.744 [2024-07-24 19:58:04.201542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.744 [2024-07-24 19:58:04.201549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.744 [2024-07-24 
19:58:04.201557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67440 len:8 PRP1 0x0 PRP2 0x0 00:19:35.744 [2024-07-24 19:58:04.201567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.744 [2024-07-24 19:58:04.201577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:35.744 [2024-07-24 19:58:04.201584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.744 [2024-07-24 19:58:04.201592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67448 len:8 PRP1 0x0 PRP2 0x0 00:19:35.744 [2024-07-24 19:58:04.201601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.744 [2024-07-24 19:58:04.201659] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xaf51b0 was disconnected and freed. reset controller. 00:19:35.744 [2024-07-24 19:58:04.201787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.744 [2024-07-24 19:58:04.201806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.744 [2024-07-24 19:58:04.201824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.744 [2024-07-24 19:58:04.201834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.744 [2024-07-24 19:58:04.201844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.744 [2024-07-24 19:58:04.201853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.744 [2024-07-24 19:58:04.201863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.744 [2024-07-24 19:58:04.201873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.744 [2024-07-24 19:58:04.201881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa84d40 is same with the state(5) to be set 00:19:35.744 [2024-07-24 19:58:04.202320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:35.744 [2024-07-24 19:58:04.202356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa84d40 (9): Bad file descriptor 00:19:35.744 [2024-07-24 19:58:04.202463] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:35.744 [2024-07-24 19:58:04.202486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa84d40 with addr=10.0.0.2, port=4420 00:19:35.744 [2024-07-24 19:58:04.202498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa84d40 is same with the state(5) to be set 00:19:35.744 [2024-07-24 19:58:04.202516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa84d40 (9): Bad file descriptor 00:19:35.744 [2024-07-24 
19:58:04.202532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:35.744 [2024-07-24 19:58:04.202541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:35.744 [2024-07-24 19:58:04.202552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:35.744 [2024-07-24 19:58:04.202573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:35.744 [2024-07-24 19:58:04.202590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:35.744 19:58:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:19:36.681 [2024-07-24 19:58:05.202782] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:36.681 [2024-07-24 19:58:05.203344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa84d40 with addr=10.0.0.2, port=4420 00:19:36.681 [2024-07-24 19:58:05.203799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa84d40 is same with the state(5) to be set 00:19:36.681 [2024-07-24 19:58:05.204208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa84d40 (9): Bad file descriptor 00:19:36.681 [2024-07-24 19:58:05.204624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:36.681 [2024-07-24 19:58:05.205028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:36.681 [2024-07-24 19:58:05.205407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:36.681 [2024-07-24 19:58:05.205642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:36.681 [2024-07-24 19:58:05.205873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:36.681 19:58:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:36.940 [2024-07-24 19:58:05.484416] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.940 19:58:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 81493 00:19:37.882 [2024-07-24 19:58:06.223434] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
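The failure window traced above is driven by nothing more than the target-side listener being removed and later restored: while the listener is gone, in-flight I/O completes as ABORTED - SQ DELETION, every reconnect attempt fails with connect() errno 111, and bdev_nvme retries roughly once per second (the configured --reconnect-delay-sec) until the listener returns and the reset succeeds ("Resetting controller successful"). A hedged sketch of that fault-injection sequence follows, using only the rpc.py calls that appear in this trace; the sleep length is illustrative rather than the exact pause host/timeout.sh uses.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Drop the TCP listener: outstanding I/O is aborted (SQ DELETION) and the host's
    # reconnect attempts fail with connect() errno 111 until the listener is back.
    "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

    sleep 2   # illustrative pause while reconnect attempts are expected to fail

    # Restore the listener: the next reconnect attempt succeeds and bdev_nvme
    # reports "Resetting controller successful".
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420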
00:19:44.445 00:19:44.445 Latency(us) 00:19:44.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.445 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:44.445 Verification LBA range: start 0x0 length 0x4000 00:19:44.445 NVMe0n1 : 10.01 6264.96 24.47 0.00 0.00 20382.79 1355.40 3019898.88 00:19:44.445 =================================================================================================================== 00:19:44.445 Total : 6264.96 24.47 0.00 0.00 20382.79 1355.40 3019898.88 00:19:44.445 0 00:19:44.445 19:58:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=81602 00:19:44.445 19:58:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:44.445 19:58:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:19:44.704 Running I/O for 10 seconds... 00:19:45.636 19:58:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:45.896 [2024-07-24 19:58:14.361968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.896 [2024-07-24 19:58:14.362045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.896 [2024-07-24 19:58:14.362070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.896 [2024-07-24 19:58:14.362081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.896 [2024-07-24 19:58:14.362093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.896 [2024-07-24 19:58:14.362103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.896 [2024-07-24 19:58:14.362114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.896 [2024-07-24 19:58:14.362124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.896 [2024-07-24 19:58:14.362135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.896 [2024-07-24 19:58:14.362145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.896 [2024-07-24 19:58:14.362156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.896 [2024-07-24 19:58:14.362165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.896 [2024-07-24 19:58:14.362177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.896 [2024-07-24 19:58:14.362186] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.896 [2024-07-24 19:58:14.362214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.896 [2024-07-24 19:58:14.362224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.896 [2024-07-24 19:58:14.362237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.896 [2024-07-24 19:58:14.362247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.896 [2024-07-24 19:58:14.362259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.896 [2024-07-24 19:58:14.362268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.896 [2024-07-24 19:58:14.362279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.896 [2024-07-24 19:58:14.362289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.896 [2024-07-24 19:58:14.362301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.896 [2024-07-24 19:58:14.362311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.896 [2024-07-24 19:58:14.362323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362888] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.362972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.362996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.363008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.363017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.363036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.363046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.363056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.363065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.363076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.363086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.363098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.363106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.363117] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.363127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.363137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.363146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.363157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.363166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.363177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.363187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.897 [2024-07-24 19:58:14.363198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.897 [2024-07-24 19:58:14.363208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64224 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 
[2024-07-24 19:58:14.363531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363762] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.363981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.363991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.364004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.364013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.364025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.364035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.364046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.898 [2024-07-24 19:58:14.364055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.898 [2024-07-24 19:58:14.364066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.899 [2024-07-24 19:58:14.364076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.899 [2024-07-24 19:58:14.364097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.899 [2024-07-24 19:58:14.364120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.899 [2024-07-24 19:58:14.364141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.899 [2024-07-24 19:58:14.364162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.899 [2024-07-24 19:58:14.364182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.899 [2024-07-24 19:58:14.364203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:45.899 [2024-07-24 19:58:14.364215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.899 [2024-07-24 19:58:14.364224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.899 [2024-07-24 19:58:14.364246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.899 [2024-07-24 19:58:14.364268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.899 [2024-07-24 19:58:14.364289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.899 [2024-07-24 19:58:14.364310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.899 [2024-07-24 19:58:14.364331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.899 [2024-07-24 19:58:14.364353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.899 [2024-07-24 19:58:14.364374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.899 [2024-07-24 19:58:14.364395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.899 [2024-07-24 19:58:14.364415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364427] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.899 [2024-07-24 19:58:14.364437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.899 [2024-07-24 19:58:14.364458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.899 [2024-07-24 19:58:14.364478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.899 [2024-07-24 19:58:14.364499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.899 [2024-07-24 19:58:14.364520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.899 [2024-07-24 19:58:14.364542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.899 [2024-07-24 19:58:14.364564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.899 [2024-07-24 19:58:14.364592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.899 [2024-07-24 19:58:14.364614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.899 [2024-07-24 19:58:14.364635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364646] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:45.899 [2024-07-24 19:58:14.364656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.899 [2024-07-24 19:58:14.364677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.899 [2024-07-24 19:58:14.364698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.899 [2024-07-24 19:58:14.364720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.899 [2024-07-24 19:58:14.364758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.899 [2024-07-24 19:58:14.364781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.899 [2024-07-24 19:58:14.364803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.899 [2024-07-24 19:58:14.364825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:45.899 [2024-07-24 19:58:14.364848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafa5b0 is same with the state(5) to be set 00:19:45.899 [2024-07-24 19:58:14.364871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:45.899 [2024-07-24 19:58:14.364879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:45.899 [2024-07-24 19:58:14.364888] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64664 len:8 PRP1 0x0 PRP2 0x0 00:19:45.899 [2024-07-24 19:58:14.364897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.364950] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xafa5b0 was disconnected and freed. reset controller. 00:19:45.899 [2024-07-24 19:58:14.365022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:45.899 [2024-07-24 19:58:14.365038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.899 [2024-07-24 19:58:14.365050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:45.900 [2024-07-24 19:58:14.365065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.900 [2024-07-24 19:58:14.365076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:45.900 [2024-07-24 19:58:14.365085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.900 [2024-07-24 19:58:14.365095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:45.900 [2024-07-24 19:58:14.365104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:45.900 [2024-07-24 19:58:14.365114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa84d40 is same with the state(5) to be set 00:19:45.900 [2024-07-24 19:58:14.365329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:45.900 [2024-07-24 19:58:14.365351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa84d40 (9): Bad file descriptor 00:19:45.900 [2024-07-24 19:58:14.365445] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:45.900 [2024-07-24 19:58:14.365466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa84d40 with addr=10.0.0.2, port=4420 00:19:45.900 [2024-07-24 19:58:14.365478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa84d40 is same with the state(5) to be set 00:19:45.900 [2024-07-24 19:58:14.365495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa84d40 (9): Bad file descriptor 00:19:45.900 [2024-07-24 19:58:14.365511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:45.900 [2024-07-24 19:58:14.365522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:45.900 [2024-07-24 19:58:14.365533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:45.900 [2024-07-24 19:58:14.365555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
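The wall of "ABORTED - SQ DELETION" notices above is the host-side NVMe driver draining its queued WRITE/READ commands after the qpair to the target was disconnected and freed, and the "connect() failed, errno = 111" entries that follow are the first reconnect attempts being refused because the TCP listener is gone at this point in the test. On Linux, errno 111 is ECONNREFUSED; a quick stand-alone check of that mapping (not part of the test scripts) is:

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # expected output on Linux: ECONNREFUSED - Connection refused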
00:19:45.900 [2024-07-24 19:58:14.365566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:45.900 19:58:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:19:46.833 [2024-07-24 19:58:15.365737] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:46.833 [2024-07-24 19:58:15.366254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa84d40 with addr=10.0.0.2, port=4420 00:19:46.833 [2024-07-24 19:58:15.366672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa84d40 is same with the state(5) to be set 00:19:46.833 [2024-07-24 19:58:15.367085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa84d40 (9): Bad file descriptor 00:19:46.833 [2024-07-24 19:58:15.367494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:46.833 [2024-07-24 19:58:15.367893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:46.833 [2024-07-24 19:58:15.368271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:46.833 [2024-07-24 19:58:15.368493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:46.833 [2024-07-24 19:58:15.368701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:47.765 [2024-07-24 19:58:16.369249] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:47.765 [2024-07-24 19:58:16.369682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa84d40 with addr=10.0.0.2, port=4420 00:19:47.765 [2024-07-24 19:58:16.370075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa84d40 is same with the state(5) to be set 00:19:47.765 [2024-07-24 19:58:16.370463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa84d40 (9): Bad file descriptor 00:19:47.765 [2024-07-24 19:58:16.370862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:47.765 [2024-07-24 19:58:16.371246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:47.765 [2024-07-24 19:58:16.371453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:47.765 [2024-07-24 19:58:16.371490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
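The nearly identical blocks above (19:58:14, 19:58:15, 19:58:16) are successive passes of the bdev_nvme reconnect loop: each pass tries to re-open the TCP connection to 10.0.0.2:4420, gets the refused connection, marks the controller as failed, and schedules another reset, while host/timeout.sh@101 simply sleeps for 3 seconds. To watch the controller state from outside while reproducing such an outage, one option is to poll the standard bdev_nvme_get_controllers RPC on the bdevperf application's socket; a minimal sketch, assuming the /var/tmp/bdevperf.sock path used elsewhere in this log:

  # Poll the NVMe bdev controller state once per second during the listener outage
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  while true; do
      "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
      sleep 1
  done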
00:19:47.765 [2024-07-24 19:58:16.371503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:49.134 [2024-07-24 19:58:17.372014] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:49.134 [2024-07-24 19:58:17.372420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa84d40 with addr=10.0.0.2, port=4420 00:19:49.134 [2024-07-24 19:58:17.372445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa84d40 is same with the state(5) to be set 00:19:49.134 [2024-07-24 19:58:17.372705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa84d40 (9): Bad file descriptor 00:19:49.134 [2024-07-24 19:58:17.372978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:49.134 [2024-07-24 19:58:17.372994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:49.134 [2024-07-24 19:58:17.373007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:49.134 [2024-07-24 19:58:17.377045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:49.134 [2024-07-24 19:58:17.377082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:49.134 19:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:49.134 [2024-07-24 19:58:17.610687] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.134 19:58:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 81602 00:19:50.067 [2024-07-24 19:58:18.411056] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
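Recovery here is driven purely by the target side: host/timeout.sh@102 re-adds the TCP listener for nqn.2016-06.io.spdk:cnode1, the target prints "NVMe/TCP Target Listening on 10.0.0.2 port 4420", and the very next reconnect attempt (19:58:18) ends with "Resetting controller successful." The outage itself was created earlier in the test, outside this excerpt, by removing the same listener. A minimal sketch of that remove/re-add cycle, using only the RPCs and parameters that appear in this log (the default target RPC socket is assumed):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Drop the listener so the host's reconnect attempts are refused ...
  "$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3   # matches the sleep in host/timeout.sh@101
  # ... then restore it and let the next controller reset succeed
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420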
00:19:55.329 
00:19:55.329 Latency(us) 
00:19:55.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:55.329 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 
00:19:55.329 Verification LBA range: start 0x0 length 0x4000 
00:19:55.329 NVMe0n1 : 10.01 5288.81 20.66 3574.76 0.00 14412.09 726.11 3019898.88 
00:19:55.329 =================================================================================================================== 
00:19:55.329 Total : 5288.81 20.66 3574.76 0.00 14412.09 0.00 3019898.88 
00:19:55.329 0 
00:19:55.329 19:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 81475 
00:19:55.329 19:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81475 ']' 
00:19:55.329 19:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81475 
00:19:55.329 19:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 
00:19:55.329 19:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:19:55.329 19:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81475 
00:19:55.329 killing process with pid 81475 Received shutdown signal, test time was about 10.000000 seconds 
00:19:55.329 
00:19:55.329 Latency(us) 
00:19:55.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:55.329 =================================================================================================================== 
00:19:55.329 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:19:55.329 19:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 
00:19:55.329 19:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 
00:19:55.329 19:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81475' 
00:19:55.329 19:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81475 
00:19:55.329 19:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81475 
00:19:55.329 19:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 
00:19:55.329 19:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=81718 
00:19:55.329 19:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 81718 /var/tmp/bdevperf.sock 
00:19:55.329 19:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 81718 ']' 
00:19:55.329 19:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:19:55.329 19:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 
00:19:55.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:55.329 19:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:55.329 19:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:55.329 19:58:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:55.329 [2024-07-24 19:58:23.560231] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:19:55.329 [2024-07-24 19:58:23.560564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81718 ] 00:19:55.329 [2024-07-24 19:58:23.698945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.329 [2024-07-24 19:58:23.809866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.329 [2024-07-24 19:58:23.863219] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:56.263 19:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:56.263 19:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:19:56.263 19:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=81734 00:19:56.263 19:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81718 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:19:56.263 19:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:19:56.264 19:58:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:56.522 NVMe0n1 00:19:56.522 19:58:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=81770 00:19:56.522 19:58:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:56.522 19:58:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:19:56.781 Running I/O for 10 seconds... 
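Taken together, the trace above sets up the second timeout scenario: a fresh bdevperf (core mask 0x4, queue depth 128, 4096-byte random reads for 10 seconds) is started in RPC-wait mode on /var/tmp/bdevperf.sock, bdev_nvme_set_options is applied with the flags as logged (-r -1 -e 9), controller NVMe0 is attached over TCP with a 5-second controller-loss timeout and a 2-second reconnect delay, a bpftrace probe (nvmf_timeout.bt) is attached to the bdevperf pid, and perform_tests starts the 10-second run. Collected into one place, an equivalent command sequence looks roughly like the sketch below, using the paths exactly as they appear in this log; the waitforlisten and bpftrace steps are elided:

  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bdevperf.sock
  # Start bdevperf in "wait for RPC" mode (-z) and background it
  "$spdk/build/examples/bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w randread -t 10 -f &
  # Configure the NVMe bdev module and attach the target subsystem with the timeout knobs
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options -r -1 -e 9
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # Kick off the actual I/O run
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests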
00:19:57.715 19:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:57.976 [2024-07-24 19:58:26.446191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.976 [2024-07-24 19:58:26.446260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.976 [2024-07-24 19:58:26.446285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:127760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.976 [2024-07-24 19:58:26.446296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.976 [2024-07-24 19:58:26.446310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.976 [2024-07-24 19:58:26.446320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.976 [2024-07-24 19:58:26.446333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:37672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.976 [2024-07-24 19:58:26.446343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.976 [2024-07-24 19:58:26.446354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:55472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.976 [2024-07-24 19:58:26.446365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.976 [2024-07-24 19:58:26.446377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.976 [2024-07-24 19:58:26.446387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.976 [2024-07-24 19:58:26.446400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:47312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.976 [2024-07-24 19:58:26.446410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.976 [2024-07-24 19:58:26.446422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.976 [2024-07-24 19:58:26.446433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.976 [2024-07-24 19:58:26.446445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.976 [2024-07-24 19:58:26.446455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.976 [2024-07-24 19:58:26.446467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.976 
[2024-07-24 19:58:26.446478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.976 [2024-07-24 19:58:26.446491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.976 [2024-07-24 19:58:26.446501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.976 [2024-07-24 19:58:26.446513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:100800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.976 [2024-07-24 19:58:26.446523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.976 [2024-07-24 19:58:26.446536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:36616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.976 [2024-07-24 19:58:26.446546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.976 [2024-07-24 19:58:26.446561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.976 [2024-07-24 19:58:26.446571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.976 [2024-07-24 19:58:26.446583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.976 [2024-07-24 19:58:26.446593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.976 [2024-07-24 19:58:26.446605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.976 [2024-07-24 19:58:26.446615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.976 [2024-07-24 19:58:26.446627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:111424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.976 [2024-07-24 19:58:26.446638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.976 [2024-07-24 19:58:26.446650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.976 [2024-07-24 19:58:26.446662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.976 [2024-07-24 19:58:26.446674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.976 [2024-07-24 19:58:26.446685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.976 [2024-07-24 19:58:26.446697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.976 [2024-07-24 19:58:26.446707] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.976 [2024-07-24 19:58:26.446719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.976 [2024-07-24 19:58:26.446730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.446759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.446771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.446783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.446794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.446806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:114392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.446817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.446829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:115560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.446840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.446852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:46752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.446864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.446877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:104912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.446887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.446899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:53448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.446909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.446921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:107144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.446932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.446954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.446964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.446976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.446986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.446998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.447008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.447019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.447029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.447041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:43088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.447052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.447064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.447074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.447086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.447096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.447107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.447117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.447129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.447139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.447152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:115096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.447162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.447173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.447183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.447195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.447205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.447217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.447228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.447240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.447250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.447261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.447271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.447283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:47944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.447294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.447306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:37992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.447315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.447851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:124760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.447872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.447886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.447897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.447909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:73328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.447919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.447932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.447943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.447955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:123496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.447965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.447978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.447988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.448000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:115248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.977 [2024-07-24 19:58:26.448010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.977 [2024-07-24 19:58:26.448022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:68192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:51664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:104792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:109552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:57.978 [2024-07-24 19:58:26.448178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:106056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:74704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:39568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:50416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448401] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:109992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:61008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:67576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448625] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:124824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:45536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:61192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:45024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.978 [2024-07-24 19:58:26.448769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.978 [2024-07-24 19:58:26.448781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:106632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.448792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.448816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.448827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.448839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:112960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.448850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.448862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:91440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.448872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.448884] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:90 nsid:1 lba:73552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.448894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.448906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:47568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.448916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.448928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.448938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.448950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:113680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.448960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.448972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:52848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.448982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.448993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:116056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.449003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.449015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:45032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.449026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.449038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.449048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.449060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:35968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.449070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.449083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:93536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.449098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.449110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 
lba:73320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.449120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.449132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.449142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.449154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:107408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.449164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.449176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:60256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.449186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.449197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.449207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.449220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:114296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.449230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.449242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.449252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.449264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:71744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.449274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.449286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.449296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.449309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.449319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.449332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:40592 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.449342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.449354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:89208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.449364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.449376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.449386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.449397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:31664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.449407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.449419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:90136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.449429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.449441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.449455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.449467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.449477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.449490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:29592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.449500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.449512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.449522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.449534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:121432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.979 [2024-07-24 19:58:26.449544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.979 [2024-07-24 19:58:26.449556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:112672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:57.980 [2024-07-24 19:58:26.449566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.980 [2024-07-24 19:58:26.449579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.980 [2024-07-24 19:58:26.449590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.980 [2024-07-24 19:58:26.449602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:127936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.980 [2024-07-24 19:58:26.449613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.980 [2024-07-24 19:58:26.449625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.980 [2024-07-24 19:58:26.449635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.980 [2024-07-24 19:58:26.449647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:111792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.980 [2024-07-24 19:58:26.449656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.980 [2024-07-24 19:58:26.449668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.980 [2024-07-24 19:58:26.449678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.980 [2024-07-24 19:58:26.449690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:39040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.980 [2024-07-24 19:58:26.449700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.980 [2024-07-24 19:58:26.449712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5996a0 is same with the state(5) to be set 00:19:57.980 [2024-07-24 19:58:26.449726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:57.980 [2024-07-24 19:58:26.449735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:57.980 [2024-07-24 19:58:26.449755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48784 len:8 PRP1 0x0 PRP2 0x0 00:19:57.980 [2024-07-24 19:58:26.449766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.980 [2024-07-24 19:58:26.449822] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x5996a0 was disconnected and freed. reset controller. 
00:19:57.980 [2024-07-24 19:58:26.449912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.980 [2024-07-24 19:58:26.449929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.980 [2024-07-24 19:58:26.449941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.980 [2024-07-24 19:58:26.449956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.980 [2024-07-24 19:58:26.449967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.980 [2024-07-24 19:58:26.449977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.980 [2024-07-24 19:58:26.449988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.980 [2024-07-24 19:58:26.449998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.980 [2024-07-24 19:58:26.450007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x548c00 is same with the state(5) to be set 00:19:57.980 [2024-07-24 19:58:26.450260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:57.980 [2024-07-24 19:58:26.450286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x548c00 (9): Bad file descriptor 00:19:57.980 [2024-07-24 19:58:26.450403] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:57.980 [2024-07-24 19:58:26.450425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x548c00 with addr=10.0.0.2, port=4420 00:19:57.980 [2024-07-24 19:58:26.450436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x548c00 is same with the state(5) to be set 00:19:57.980 [2024-07-24 19:58:26.450456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x548c00 (9): Bad file descriptor 00:19:57.980 [2024-07-24 19:58:26.450472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:57.980 [2024-07-24 19:58:26.450483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:57.980 [2024-07-24 19:58:26.450493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:57.980 [2024-07-24 19:58:26.450514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:57.980 [2024-07-24 19:58:26.450526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:57.980 19:58:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 81770 00:19:59.913 [2024-07-24 19:58:28.450844] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:59.913 [2024-07-24 19:58:28.450919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x548c00 with addr=10.0.0.2, port=4420 00:19:59.913 [2024-07-24 19:58:28.450937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x548c00 is same with the state(5) to be set 00:19:59.913 [2024-07-24 19:58:28.450967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x548c00 (9): Bad file descriptor 00:19:59.913 [2024-07-24 19:58:28.450986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:59.913 [2024-07-24 19:58:28.450998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:59.913 [2024-07-24 19:58:28.451010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:59.913 [2024-07-24 19:58:28.451038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:59.913 [2024-07-24 19:58:28.451060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:01.831 [2024-07-24 19:58:30.451281] uring.c: 632:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:01.831 [2024-07-24 19:58:30.451352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x548c00 with addr=10.0.0.2, port=4420 00:20:01.831 [2024-07-24 19:58:30.451370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x548c00 is same with the state(5) to be set 00:20:01.831 [2024-07-24 19:58:30.451398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x548c00 (9): Bad file descriptor 00:20:01.831 [2024-07-24 19:58:30.451417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:01.831 [2024-07-24 19:58:30.451428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:01.831 [2024-07-24 19:58:30.451441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:01.831 [2024-07-24 19:58:30.451469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:01.831 [2024-07-24 19:58:30.451482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:04.366 [2024-07-24 19:58:32.451651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:04.366 [2024-07-24 19:58:32.451724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:04.366 [2024-07-24 19:58:32.451750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:04.366 [2024-07-24 19:58:32.451763] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:20:04.366 [2024-07-24 19:58:32.451797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:04.933 00:20:04.933 Latency(us) 00:20:04.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.933 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:20:04.933 NVMe0n1 : 8.14 2120.70 8.28 15.72 0.00 59857.63 7923.90 7015926.69 00:20:04.933 =================================================================================================================== 00:20:04.933 Total : 2120.70 8.28 15.72 0.00 59857.63 7923.90 7015926.69 00:20:04.933 0 00:20:04.933 19:58:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:04.933 Attaching 5 probes... 00:20:04.933 1331.686452: reset bdev controller NVMe0 00:20:04.933 1331.774121: reconnect bdev controller NVMe0 00:20:04.933 3332.119702: reconnect delay bdev controller NVMe0 00:20:04.933 3332.144684: reconnect bdev controller NVMe0 00:20:04.933 5332.582851: reconnect delay bdev controller NVMe0 00:20:04.933 5332.605284: reconnect bdev controller NVMe0 00:20:04.933 7333.056892: reconnect delay bdev controller NVMe0 00:20:04.933 7333.081699: reconnect bdev controller NVMe0 00:20:04.933 19:58:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:20:04.933 19:58:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:20:04.933 19:58:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 81734 00:20:04.933 19:58:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:04.933 19:58:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 81718 00:20:04.933 19:58:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81718 ']' 00:20:04.933 19:58:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81718 00:20:04.933 19:58:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:20:04.933 19:58:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:04.933 19:58:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81718 00:20:04.933 killing process with pid 81718 00:20:04.933 Received shutdown signal, test time was about 8.203344 seconds 00:20:04.933 00:20:04.933 Latency(us) 00:20:04.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.933 =================================================================================================================== 00:20:04.933 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:04.933 19:58:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:04.933 19:58:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:04.933 19:58:33 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81718' 00:20:04.933 19:58:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81718 00:20:04.933 19:58:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81718 00:20:05.191 19:58:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:05.450 19:58:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:20:05.450 19:58:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:20:05.450 19:58:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:05.450 19:58:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:20:05.450 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:05.450 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:20:05.450 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:05.450 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:05.450 rmmod nvme_tcp 00:20:05.450 rmmod nvme_fabrics 00:20:05.450 rmmod nvme_keyring 00:20:05.450 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:05.450 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:20:05.450 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:20:05.450 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 81279 ']' 00:20:05.450 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 81279 00:20:05.450 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 81279 ']' 00:20:05.450 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 81279 00:20:05.450 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:20:05.450 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:05.450 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81279 00:20:05.450 killing process with pid 81279 00:20:05.450 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:05.450 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:05.450 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81279' 00:20:05.450 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 81279 00:20:05.450 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 81279 00:20:05.708 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:05.708 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:05.708 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:05.708 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:05.708 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:05.708 19:58:34 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.708 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:05.708 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.966 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:05.966 ************************************ 00:20:05.966 END TEST nvmf_timeout 00:20:05.966 ************************************ 00:20:05.966 00:20:05.966 real 0m47.577s 00:20:05.966 user 2m19.456s 00:20:05.966 sys 0m5.975s 00:20:05.966 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:05.966 19:58:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:05.966 19:58:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:20:05.966 19:58:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:05.966 ************************************ 00:20:05.966 END TEST nvmf_host 00:20:05.966 ************************************ 00:20:05.966 00:20:05.966 real 5m6.081s 00:20:05.966 user 13m22.335s 00:20:05.966 sys 1m8.814s 00:20:05.966 19:58:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:05.967 19:58:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.967 ************************************ 00:20:05.967 END TEST nvmf_tcp 00:20:05.967 ************************************ 00:20:05.967 00:20:05.967 real 12m15.832s 00:20:05.967 user 29m52.926s 00:20:05.967 sys 3m2.755s 00:20:05.967 19:58:34 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:05.967 19:58:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:05.967 19:58:34 -- spdk/autotest.sh@292 -- # [[ 1 -eq 0 ]] 00:20:05.967 19:58:34 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:05.967 19:58:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:05.967 19:58:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:05.967 19:58:34 -- common/autotest_common.sh@10 -- # set +x 00:20:05.967 ************************************ 00:20:05.967 START TEST nvmf_dif 00:20:05.967 ************************************ 00:20:05.967 19:58:34 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:05.967 * Looking for test storage... 
00:20:05.967 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:05.967 19:58:34 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:05.967 19:58:34 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:05.967 19:58:34 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:05.967 19:58:34 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:05.967 19:58:34 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.967 19:58:34 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.967 19:58:34 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.967 19:58:34 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:20:05.967 19:58:34 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:05.967 19:58:34 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:20:05.967 19:58:34 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:05.967 19:58:34 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:05.967 19:58:34 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:20:05.967 19:58:34 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:05.967 19:58:34 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.967 19:58:34 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:05.967 19:58:34 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.225 19:58:34 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:06.226 19:58:34 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:06.226 Cannot find device "nvmf_tgt_br" 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@155 -- # true 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:06.226 Cannot find device "nvmf_tgt_br2" 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@156 -- # true 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:06.226 Cannot find device "nvmf_tgt_br" 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@158 -- # true 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:06.226 Cannot find device "nvmf_tgt_br2" 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@159 -- # true 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:06.226 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@162 -- # true 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:06.226 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@163 -- # true 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:06.226 19:58:34 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:06.483 19:58:34 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:06.483 19:58:34 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:06.483 
19:58:34 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:06.483 19:58:34 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:06.483 19:58:34 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:06.483 19:58:34 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:06.483 19:58:34 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:06.483 19:58:34 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:06.483 19:58:34 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:06.483 19:58:34 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:06.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:06.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:20:06.483 00:20:06.483 --- 10.0.0.2 ping statistics --- 00:20:06.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.483 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:20:06.483 19:58:34 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:06.483 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:06.483 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:20:06.483 00:20:06.483 --- 10.0.0.3 ping statistics --- 00:20:06.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.483 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:20:06.483 19:58:34 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:06.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:06.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:20:06.483 00:20:06.483 --- 10.0.0.1 ping statistics --- 00:20:06.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.483 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:06.483 19:58:34 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:06.483 19:58:34 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:20:06.483 19:58:34 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:20:06.483 19:58:34 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:06.740 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:06.740 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:06.740 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:06.740 19:58:35 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:06.740 19:58:35 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:06.740 19:58:35 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:06.740 19:58:35 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:06.740 19:58:35 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:06.740 19:58:35 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:06.740 19:58:35 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:06.740 19:58:35 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:20:06.740 19:58:35 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:06.740 19:58:35 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:06.740 19:58:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:06.740 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:20:06.740 19:58:35 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=82209 00:20:06.740 19:58:35 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:06.740 19:58:35 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 82209 00:20:06.740 19:58:35 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 82209 ']' 00:20:06.740 19:58:35 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.740 19:58:35 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:06.740 19:58:35 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.740 19:58:35 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:06.740 19:58:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:06.998 [2024-07-24 19:58:35.433799] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:20:06.998 [2024-07-24 19:58:35.434055] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.998 [2024-07-24 19:58:35.570766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.347 [2024-07-24 19:58:35.687449] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.347 [2024-07-24 19:58:35.687747] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.347 [2024-07-24 19:58:35.687888] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.347 [2024-07-24 19:58:35.688092] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.347 [2024-07-24 19:58:35.688206] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
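Note: the nvmf_veth_init and nvmfappstart steps traced above amount to building a veth/bridge test network with the target-side interfaces inside the nvmf_tgt_ns_spdk namespace (10.0.0.2, 10.0.0.3) and the initiator side in the root namespace (10.0.0.1), opening TCP port 4420, verifying reachability, then launching nvmf_tgt inside the namespace and waiting for its RPC socket. A standalone sketch of that sequence follows; the second target pair (nvmf_tgt_if2 / 10.0.0.3) is set up the same way and omitted here, and the socket-wait loop is a simplified stand-in for waitforlisten.

#!/usr/bin/env bash
# Sketch of the test-network bring-up traced above (one target interface shown).
set -e

ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # target address must answer before the test proceeds

# Launch the target inside the namespace and wait for its RPC socket.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done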
00:20:07.347 [2024-07-24 19:58:35.688329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.347 [2024-07-24 19:58:35.743748] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:07.924 19:58:36 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:07.924 19:58:36 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:20:07.924 19:58:36 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:07.924 19:58:36 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:07.924 19:58:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:07.924 19:58:36 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.924 19:58:36 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:20:07.924 19:58:36 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:07.924 19:58:36 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.924 19:58:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:07.924 [2024-07-24 19:58:36.482094] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.924 19:58:36 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.924 19:58:36 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:07.924 19:58:36 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:07.924 19:58:36 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:07.924 19:58:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:07.924 ************************************ 00:20:07.924 START TEST fio_dif_1_default 00:20:07.924 ************************************ 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:07.924 bdev_null0 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.924 19:58:36 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:07.924 [2024-07-24 19:58:36.530232] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:07.924 19:58:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:07.924 { 00:20:07.924 "params": { 00:20:07.924 "name": "Nvme$subsystem", 00:20:07.924 "trtype": "$TEST_TRANSPORT", 00:20:07.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:07.924 "adrfam": "ipv4", 00:20:07.924 "trsvcid": "$NVMF_PORT", 00:20:07.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:07.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:07.924 "hdgst": ${hdgst:-false}, 00:20:07.924 "ddgst": ${ddgst:-false} 00:20:07.924 }, 00:20:07.924 "method": "bdev_nvme_attach_controller" 00:20:07.924 } 00:20:07.924 EOF 00:20:07.924 )") 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:07.925 "params": { 00:20:07.925 "name": "Nvme0", 00:20:07.925 "trtype": "tcp", 00:20:07.925 "traddr": "10.0.0.2", 00:20:07.925 "adrfam": "ipv4", 00:20:07.925 "trsvcid": "4420", 00:20:07.925 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:07.925 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:07.925 "hdgst": false, 00:20:07.925 "ddgst": false 00:20:07.925 }, 00:20:07.925 "method": "bdev_nvme_attach_controller" 00:20:07.925 }' 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:07.925 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:08.184 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:08.184 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:08.184 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:08.184 19:58:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:08.184 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:08.184 fio-3.35 00:20:08.184 Starting 1 thread 00:20:20.388 00:20:20.389 filename0: (groupid=0, jobs=1): err= 0: pid=82277: Wed Jul 24 19:58:47 2024 00:20:20.389 read: IOPS=8708, BW=34.0MiB/s (35.7MB/s)(340MiB/10001msec) 00:20:20.389 slat (usec): min=6, max=348, avg= 8.64, stdev= 3.72 00:20:20.389 clat (usec): min=365, max=4287, avg=434.03, stdev=35.89 00:20:20.389 lat (usec): min=371, max=4347, avg=442.67, stdev=36.47 00:20:20.389 clat percentiles (usec): 00:20:20.389 | 1.00th=[ 392], 5.00th=[ 404], 10.00th=[ 408], 20.00th=[ 416], 00:20:20.389 | 30.00th=[ 420], 40.00th=[ 429], 50.00th=[ 433], 60.00th=[ 437], 00:20:20.389 | 70.00th=[ 445], 80.00th=[ 449], 90.00th=[ 461], 95.00th=[ 469], 00:20:20.389 | 99.00th=[ 494], 99.50th=[ 519], 99.90th=[ 660], 99.95th=[ 750], 00:20:20.389 | 99.99th=[ 1012] 00:20:20.389 bw ( KiB/s): min=33852, max=35520, per=100.00%, avg=34854.53, stdev=419.88, samples=19 00:20:20.389 iops : min= 8463, max= 8880, avg=8713.63, stdev=104.97, samples=19 00:20:20.389 lat (usec) : 500=99.24%, 750=0.71%, 1000=0.04% 00:20:20.389 lat 
(msec) : 2=0.01%, 10=0.01% 00:20:20.389 cpu : usr=85.32%, sys=12.60%, ctx=101, majf=0, minf=0 00:20:20.389 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:20.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.389 issued rwts: total=87096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.389 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:20.389 00:20:20.389 Run status group 0 (all jobs): 00:20:20.389 READ: bw=34.0MiB/s (35.7MB/s), 34.0MiB/s-34.0MiB/s (35.7MB/s-35.7MB/s), io=340MiB (357MB), run=10001-10001msec 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:20.389 ************************************ 00:20:20.389 END TEST fio_dif_1_default 00:20:20.389 ************************************ 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.389 00:20:20.389 real 0m10.970s 00:20:20.389 user 0m9.131s 00:20:20.389 sys 0m1.534s 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:20.389 19:58:47 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:20.389 19:58:47 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:20.389 19:58:47 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:20.389 19:58:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:20.389 ************************************ 00:20:20.389 START TEST fio_dif_1_multi_subsystems 00:20:20.389 ************************************ 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:20:20.389 19:58:47 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:20.389 bdev_null0 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:20.389 [2024-07-24 19:58:47.554874] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:20.389 bdev_null1 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:20.389 { 00:20:20.389 "params": { 00:20:20.389 "name": "Nvme$subsystem", 00:20:20.389 "trtype": "$TEST_TRANSPORT", 00:20:20.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.389 "adrfam": "ipv4", 00:20:20.389 "trsvcid": "$NVMF_PORT", 00:20:20.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.389 "hdgst": ${hdgst:-false}, 00:20:20.389 "ddgst": ${ddgst:-false} 00:20:20.389 }, 00:20:20.389 "method": "bdev_nvme_attach_controller" 00:20:20.389 } 00:20:20.389 EOF 00:20:20.389 )") 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:20.389 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1341 -- # shift 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:20.390 { 00:20:20.390 "params": { 00:20:20.390 "name": "Nvme$subsystem", 00:20:20.390 "trtype": "$TEST_TRANSPORT", 00:20:20.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.390 "adrfam": "ipv4", 00:20:20.390 "trsvcid": "$NVMF_PORT", 00:20:20.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.390 "hdgst": ${hdgst:-false}, 00:20:20.390 "ddgst": ${ddgst:-false} 00:20:20.390 }, 00:20:20.390 "method": "bdev_nvme_attach_controller" 00:20:20.390 } 00:20:20.390 EOF 00:20:20.390 )") 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
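Note: the fio_bdev helper seen in these traces is a thin wrapper. It LD_PRELOADs SPDK's fio bdev plugin and hands fio a bdev JSON configuration on one file descriptor and the generated job file on another (the /dev/fd/62 and /dev/fd/61 arguments above). A rough stand-alone equivalent for the single-subsystem case is sketched below; the attach-controller parameters match this run, while the job-file contents and the Nvme0n1 filename are assumptions, since gen_fio_conf's output is not shown in the log.

#!/usr/bin/env bash
# Sketch: run fio through the SPDK bdev ioengine against the target set up above.
PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev

cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

cat > /tmp/dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4k
iodepth=4
time_based=1
runtime=10

[filename0]
# the "Nvme0" controller exposes its first namespace as bdev "Nvme0n1" (assumed name)
filename=Nvme0n1
EOF

LD_PRELOAD=$PLUGIN /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio

Multi-subsystem cases add a second bdev_nvme_attach_controller entry (Nvme1 / cnode1) and a second fio job, as the generated configuration printed next shows.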
00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:20.390 "params": { 00:20:20.390 "name": "Nvme0", 00:20:20.390 "trtype": "tcp", 00:20:20.390 "traddr": "10.0.0.2", 00:20:20.390 "adrfam": "ipv4", 00:20:20.390 "trsvcid": "4420", 00:20:20.390 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:20.390 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:20.390 "hdgst": false, 00:20:20.390 "ddgst": false 00:20:20.390 }, 00:20:20.390 "method": "bdev_nvme_attach_controller" 00:20:20.390 },{ 00:20:20.390 "params": { 00:20:20.390 "name": "Nvme1", 00:20:20.390 "trtype": "tcp", 00:20:20.390 "traddr": "10.0.0.2", 00:20:20.390 "adrfam": "ipv4", 00:20:20.390 "trsvcid": "4420", 00:20:20.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.390 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.390 "hdgst": false, 00:20:20.390 "ddgst": false 00:20:20.390 }, 00:20:20.390 "method": "bdev_nvme_attach_controller" 00:20:20.390 }' 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:20.390 19:58:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:20.390 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:20.390 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:20.390 fio-3.35 00:20:20.390 Starting 2 threads 00:20:30.412 00:20:30.412 filename0: (groupid=0, jobs=1): err= 0: pid=82436: Wed Jul 24 19:58:58 2024 00:20:30.412 read: IOPS=4615, BW=18.0MiB/s (18.9MB/s)(180MiB/10001msec) 00:20:30.412 slat (nsec): min=6706, max=81953, avg=14552.25, stdev=4947.26 00:20:30.412 clat (usec): min=631, max=2892, avg=826.41, stdev=51.54 00:20:30.412 lat (usec): min=641, max=2909, avg=840.96, stdev=53.11 00:20:30.412 clat percentiles (usec): 00:20:30.412 | 1.00th=[ 725], 5.00th=[ 758], 10.00th=[ 775], 20.00th=[ 791], 00:20:30.412 | 30.00th=[ 799], 40.00th=[ 816], 50.00th=[ 824], 60.00th=[ 832], 00:20:30.412 | 70.00th=[ 848], 80.00th=[ 857], 90.00th=[ 881], 95.00th=[ 906], 00:20:30.412 | 99.00th=[ 955], 99.50th=[ 971], 99.90th=[ 1020], 99.95th=[ 1336], 00:20:30.412 | 99.99th=[ 2073] 00:20:30.412 bw ( KiB/s): min=17280, max=19296, per=50.29%, avg=18570.11, stdev=449.78, samples=19 00:20:30.412 iops : min= 4320, max= 
4824, avg=4642.53, stdev=112.45, samples=19 00:20:30.412 lat (usec) : 750=3.69%, 1000=96.12% 00:20:30.412 lat (msec) : 2=0.18%, 4=0.02% 00:20:30.412 cpu : usr=90.14%, sys=8.43%, ctx=25, majf=0, minf=0 00:20:30.412 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:30.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.413 issued rwts: total=46160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.413 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:30.413 filename1: (groupid=0, jobs=1): err= 0: pid=82437: Wed Jul 24 19:58:58 2024 00:20:30.413 read: IOPS=4615, BW=18.0MiB/s (18.9MB/s)(180MiB/10001msec) 00:20:30.413 slat (nsec): min=7061, max=79879, avg=14056.29, stdev=4718.33 00:20:30.413 clat (usec): min=461, max=2903, avg=828.95, stdev=56.00 00:20:30.413 lat (usec): min=469, max=2914, avg=843.00, stdev=57.46 00:20:30.413 clat percentiles (usec): 00:20:30.413 | 1.00th=[ 709], 5.00th=[ 742], 10.00th=[ 766], 20.00th=[ 791], 00:20:30.413 | 30.00th=[ 807], 40.00th=[ 816], 50.00th=[ 832], 60.00th=[ 840], 00:20:30.413 | 70.00th=[ 848], 80.00th=[ 865], 90.00th=[ 889], 95.00th=[ 914], 00:20:30.413 | 99.00th=[ 963], 99.50th=[ 979], 99.90th=[ 1029], 99.95th=[ 1352], 00:20:30.413 | 99.99th=[ 2040] 00:20:30.413 bw ( KiB/s): min=17280, max=19296, per=50.29%, avg=18571.79, stdev=448.15, samples=19 00:20:30.413 iops : min= 4320, max= 4824, avg=4642.95, stdev=112.04, samples=19 00:20:30.413 lat (usec) : 500=0.01%, 750=5.94%, 1000=93.81% 00:20:30.413 lat (msec) : 2=0.23%, 4=0.02% 00:20:30.413 cpu : usr=89.65%, sys=8.89%, ctx=18, majf=0, minf=9 00:20:30.413 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:30.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.413 issued rwts: total=46164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.413 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:30.413 00:20:30.413 Run status group 0 (all jobs): 00:20:30.413 READ: bw=36.1MiB/s (37.8MB/s), 18.0MiB/s-18.0MiB/s (18.9MB/s-18.9MB/s), io=361MiB (378MB), run=10001-10001msec 00:20:30.413 19:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:30.413 19:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:30.413 19:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:30.413 19:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:30.413 19:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:30.413 19:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:30.413 19:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.413 19:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:30.413 19:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.413 19:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:30.413 19:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.413 19:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:20:30.413 19:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.413 19:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:30.413 19:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:30.413 19:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:30.413 19:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:30.413 19:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.413 19:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:30.413 19:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.413 19:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:30.413 19:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.413 19:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:30.413 ************************************ 00:20:30.413 END TEST fio_dif_1_multi_subsystems 00:20:30.413 ************************************ 00:20:30.413 19:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.413 00:20:30.413 real 0m11.166s 00:20:30.413 user 0m18.766s 00:20:30.413 sys 0m2.043s 00:20:30.413 19:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:30.413 19:58:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:30.413 19:58:58 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:30.413 19:58:58 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:30.413 19:58:58 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:30.413 19:58:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:30.413 ************************************ 00:20:30.413 START TEST fio_dif_rand_params 00:20:30.413 ************************************ 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:30.413 19:58:58 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:30.413 bdev_null0 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:30.413 [2024-07-24 19:58:58.778391] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:30.413 { 00:20:30.413 "params": { 00:20:30.413 "name": "Nvme$subsystem", 00:20:30.413 "trtype": "$TEST_TRANSPORT", 00:20:30.413 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.413 "adrfam": "ipv4", 00:20:30.413 "trsvcid": "$NVMF_PORT", 00:20:30.413 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.413 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.413 "hdgst": ${hdgst:-false}, 00:20:30.413 "ddgst": ${ddgst:-false} 00:20:30.413 }, 00:20:30.413 "method": "bdev_nvme_attach_controller" 00:20:30.413 } 00:20:30.413 EOF 00:20:30.413 )") 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:30.413 
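Note: each create_subsystems call in these tests reduces to a short RPC sequence against the target's /var/tmp/spdk.sock; the TCP transport itself was created once, earlier, with nvmf_create_transport -t tcp -o --dif-insert-or-strip. A sketch of the equivalent explicit rpc.py calls for this dif-type 3 case (script path taken from this run's repo layout):

#!/usr/bin/env bash
# Sketch: the RPCs behind create_subsystem 0 for the dif-type 3 case traced above.
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

# 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3
rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420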
19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:30.413 19:58:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:30.414 19:58:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:20:30.414 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:30.414 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:30.414 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:30.414 19:58:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:30.414 19:58:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:30.414 "params": { 00:20:30.414 "name": "Nvme0", 00:20:30.414 "trtype": "tcp", 00:20:30.414 "traddr": "10.0.0.2", 00:20:30.414 "adrfam": "ipv4", 00:20:30.414 "trsvcid": "4420", 00:20:30.414 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:30.414 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:30.414 "hdgst": false, 00:20:30.414 "ddgst": false 00:20:30.414 }, 00:20:30.414 "method": "bdev_nvme_attach_controller" 00:20:30.414 }' 00:20:30.414 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:30.414 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:30.414 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:30.414 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:30.414 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:30.414 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:30.414 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:30.414 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:30.414 19:58:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:30.414 19:58:58 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:30.414 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:30.414 ... 00:20:30.414 fio-3.35 00:20:30.414 Starting 3 threads 00:20:36.688 00:20:36.688 filename0: (groupid=0, jobs=1): err= 0: pid=82593: Wed Jul 24 19:59:04 2024 00:20:36.688 read: IOPS=257, BW=32.2MiB/s (33.8MB/s)(161MiB/5002msec) 00:20:36.688 slat (nsec): min=7416, max=45069, avg=14947.76, stdev=4832.18 00:20:36.688 clat (usec): min=11444, max=12673, avg=11596.74, stdev=150.86 00:20:36.688 lat (usec): min=11457, max=12697, avg=11611.69, stdev=150.79 00:20:36.688 clat percentiles (usec): 00:20:36.688 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:20:36.688 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11600], 60.00th=[11600], 00:20:36.688 | 70.00th=[11600], 80.00th=[11600], 90.00th=[11863], 95.00th=[11994], 00:20:36.688 | 99.00th=[12125], 99.50th=[12125], 99.90th=[12649], 99.95th=[12649], 00:20:36.688 | 99.99th=[12649] 00:20:36.688 bw ( KiB/s): min=32256, max=33792, per=33.36%, avg=33024.00, stdev=543.06, samples=9 00:20:36.688 iops : min= 252, max= 264, avg=258.00, stdev= 4.24, samples=9 00:20:36.688 lat (msec) : 20=100.00% 00:20:36.688 cpu : usr=90.58%, sys=8.76%, ctx=10, majf=0, minf=9 00:20:36.688 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:36.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.688 issued rwts: total=1290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.688 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:36.688 filename0: (groupid=0, jobs=1): err= 0: pid=82594: Wed Jul 24 19:59:04 2024 00:20:36.688 read: IOPS=257, BW=32.2MiB/s (33.8MB/s)(161MiB/5001msec) 00:20:36.688 slat (nsec): min=7266, max=40230, avg=14794.72, stdev=4732.63 00:20:36.688 clat (usec): min=9238, max=15512, avg=11594.39, stdev=260.67 00:20:36.688 lat (usec): min=9246, max=15537, avg=11609.18, stdev=260.10 00:20:36.688 clat percentiles (usec): 00:20:36.688 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:20:36.688 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11600], 60.00th=[11600], 00:20:36.688 | 70.00th=[11600], 80.00th=[11600], 90.00th=[11863], 95.00th=[11863], 00:20:36.688 | 99.00th=[12125], 99.50th=[12125], 99.90th=[15533], 99.95th=[15533], 00:20:36.688 | 99.99th=[15533] 00:20:36.688 bw ( KiB/s): min=32256, max=33792, per=33.37%, avg=33031.11, stdev=532.05, samples=9 00:20:36.688 iops : min= 252, max= 264, avg=258.00, stdev= 4.24, samples=9 00:20:36.688 lat (msec) : 10=0.23%, 20=99.77% 00:20:36.688 cpu : usr=91.32%, sys=8.12%, ctx=7, majf=0, minf=9 00:20:36.688 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:36.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.688 issued rwts: total=1290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.688 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:36.688 filename0: (groupid=0, jobs=1): err= 0: pid=82595: Wed Jul 24 19:59:04 2024 00:20:36.688 read: IOPS=257, BW=32.2MiB/s (33.8MB/s)(161MiB/5004msec) 00:20:36.688 slat (nsec): min=6700, max=57264, avg=13926.39, stdev=4968.18 00:20:36.688 clat (usec): min=10552, max=14484, avg=11604.01, 
stdev=228.11 00:20:36.688 lat (usec): min=10560, max=14516, avg=11617.94, stdev=228.22 00:20:36.688 clat percentiles (usec): 00:20:36.688 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11469], 00:20:36.688 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11600], 60.00th=[11600], 00:20:36.688 | 70.00th=[11600], 80.00th=[11600], 90.00th=[11863], 95.00th=[11994], 00:20:36.688 | 99.00th=[12125], 99.50th=[12256], 99.90th=[14484], 99.95th=[14484], 00:20:36.688 | 99.99th=[14484] 00:20:36.688 bw ( KiB/s): min=32256, max=33792, per=33.27%, avg=32938.67, stdev=461.51, samples=9 00:20:36.688 iops : min= 252, max= 264, avg=257.33, stdev= 3.61, samples=9 00:20:36.688 lat (msec) : 20=100.00% 00:20:36.688 cpu : usr=91.17%, sys=8.28%, ctx=22, majf=0, minf=9 00:20:36.688 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:36.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.688 issued rwts: total=1290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.688 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:36.688 00:20:36.688 Run status group 0 (all jobs): 00:20:36.688 READ: bw=96.7MiB/s (101MB/s), 32.2MiB/s-32.2MiB/s (33.8MB/s-33.8MB/s), io=484MiB (507MB), run=5001-5004msec 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
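The create_subsystem calls traced from this point on follow one fixed RPC pattern per subsystem; as a rough sketch reconstructed only from the xtrace lines in this log (assuming rpc_cmd is the usual autotest wrapper around SPDK's scripts/rpc.py talking to the running nvmf target), each subsystem N in the NULL_DIF=2 case is set up roughly like this:

    # Sketch, not verbatim target/dif.sh: per-subsystem setup with DIF type 2,
    # mirroring the bdev_null_create / nvmf_* RPCs traced below.
    # Assumption: rpc_cmd forwards to scripts/rpc.py against the running target.
    sub=0   # the test repeats this for subsystems 0, 1 and 2
    rpc_cmd bdev_null_create "bdev_null${sub}" 64 512 --md-size 16 --dif-type 2
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub}" \
            --serial-number "53313233-${sub}" --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub}" "bdev_null${sub}"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub}" \
            -t tcp -a 10.0.0.2 -s 4420
    # Teardown (destroy_subsystem) reverses this: nvmf_delete_subsystem, then bdev_null_delete.

fio then reaches each subsystem through the spdk_bdev ioengine, using the bdev_nvme_attach_controller JSON config that gen_nvmf_target_json prints further down in this trace.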
00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.688 bdev_null0 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.688 [2024-07-24 19:59:04.812122] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.688 bdev_null1 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:36.688 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.689 bdev_null2 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.689 { 00:20:36.689 "params": { 
00:20:36.689 "name": "Nvme$subsystem", 00:20:36.689 "trtype": "$TEST_TRANSPORT", 00:20:36.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.689 "adrfam": "ipv4", 00:20:36.689 "trsvcid": "$NVMF_PORT", 00:20:36.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.689 "hdgst": ${hdgst:-false}, 00:20:36.689 "ddgst": ${ddgst:-false} 00:20:36.689 }, 00:20:36.689 "method": "bdev_nvme_attach_controller" 00:20:36.689 } 00:20:36.689 EOF 00:20:36.689 )") 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.689 { 00:20:36.689 "params": { 00:20:36.689 "name": "Nvme$subsystem", 00:20:36.689 "trtype": "$TEST_TRANSPORT", 00:20:36.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.689 "adrfam": "ipv4", 00:20:36.689 "trsvcid": "$NVMF_PORT", 00:20:36.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.689 "hdgst": ${hdgst:-false}, 00:20:36.689 "ddgst": ${ddgst:-false} 00:20:36.689 }, 00:20:36.689 "method": "bdev_nvme_attach_controller" 00:20:36.689 } 00:20:36.689 EOF 00:20:36.689 )") 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:36.689 19:59:04 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:36.689 { 00:20:36.689 "params": { 00:20:36.689 "name": "Nvme$subsystem", 00:20:36.689 "trtype": "$TEST_TRANSPORT", 00:20:36.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.689 "adrfam": "ipv4", 00:20:36.689 "trsvcid": "$NVMF_PORT", 00:20:36.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.689 "hdgst": ${hdgst:-false}, 00:20:36.689 "ddgst": ${ddgst:-false} 00:20:36.689 }, 00:20:36.689 "method": "bdev_nvme_attach_controller" 00:20:36.689 } 00:20:36.689 EOF 00:20:36.689 )") 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:36.689 "params": { 00:20:36.689 "name": "Nvme0", 00:20:36.689 "trtype": "tcp", 00:20:36.689 "traddr": "10.0.0.2", 00:20:36.689 "adrfam": "ipv4", 00:20:36.689 "trsvcid": "4420", 00:20:36.689 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:36.689 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:36.689 "hdgst": false, 00:20:36.689 "ddgst": false 00:20:36.689 }, 00:20:36.689 "method": "bdev_nvme_attach_controller" 00:20:36.689 },{ 00:20:36.689 "params": { 00:20:36.689 "name": "Nvme1", 00:20:36.689 "trtype": "tcp", 00:20:36.689 "traddr": "10.0.0.2", 00:20:36.689 "adrfam": "ipv4", 00:20:36.689 "trsvcid": "4420", 00:20:36.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.689 "hdgst": false, 00:20:36.689 "ddgst": false 00:20:36.689 }, 00:20:36.689 "method": "bdev_nvme_attach_controller" 00:20:36.689 },{ 00:20:36.689 "params": { 00:20:36.689 "name": "Nvme2", 00:20:36.689 "trtype": "tcp", 00:20:36.689 "traddr": "10.0.0.2", 00:20:36.689 "adrfam": "ipv4", 00:20:36.689 "trsvcid": "4420", 00:20:36.689 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:36.689 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:36.689 "hdgst": false, 00:20:36.689 "ddgst": false 00:20:36.689 }, 00:20:36.689 "method": "bdev_nvme_attach_controller" 00:20:36.689 }' 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:36.689 19:59:04 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:36.689 19:59:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:36.689 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:36.689 ... 00:20:36.689 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:36.689 ... 00:20:36.689 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:36.689 ... 00:20:36.689 fio-3.35 00:20:36.689 Starting 24 threads 00:20:58.624 00:20:58.624 filename0: (groupid=0, jobs=1): err= 0: pid=82690: Wed Jul 24 19:59:25 2024 00:20:58.624 read: IOPS=559, BW=2236KiB/s (2290kB/s)(21.9MiB/10021msec) 00:20:58.624 slat (usec): min=4, max=6546, avg=18.47, stdev=152.19 00:20:58.624 clat (usec): min=2404, max=78736, avg=28489.62, stdev=11242.96 00:20:58.624 lat (usec): min=2414, max=78751, avg=28508.10, stdev=11248.54 00:20:58.624 clat percentiles (usec): 00:20:58.624 | 1.00th=[11994], 5.00th=[14746], 10.00th=[16909], 20.00th=[22676], 00:20:58.624 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[25035], 00:20:58.624 | 70.00th=[29492], 80.00th=[35914], 90.00th=[47973], 95.00th=[52691], 00:20:58.624 | 99.00th=[60031], 99.50th=[69731], 99.90th=[71828], 99.95th=[79168], 00:20:58.624 | 99.99th=[79168] 00:20:58.624 bw ( KiB/s): min= 1394, max= 2816, per=4.03%, avg=2272.37, stdev=507.09, samples=19 00:20:58.624 iops : min= 348, max= 704, avg=568.05, stdev=126.81, samples=19 00:20:58.624 lat (msec) : 4=0.32%, 10=0.18%, 20=12.87%, 50=80.29%, 100=6.34% 00:20:58.624 cpu : usr=39.63%, sys=1.76%, ctx=1062, majf=0, minf=9 00:20:58.624 IO depths : 1=0.4%, 2=2.7%, 4=10.0%, 8=72.1%, 16=14.8%, 32=0.0%, >=64=0.0% 00:20:58.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.624 complete : 0=0.0%, 4=90.4%, 8=7.3%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.624 issued rwts: total=5602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.624 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:58.624 filename0: (groupid=0, jobs=1): err= 0: pid=82691: Wed Jul 24 19:59:25 2024 00:20:58.624 read: IOPS=541, BW=2164KiB/s (2216kB/s)(21.2MiB/10014msec) 00:20:58.624 slat (usec): min=4, max=8024, avg=25.46, stdev=226.58 00:20:58.624 clat (usec): min=9992, max=96826, avg=29461.27, stdev=12978.09 00:20:58.624 lat (usec): min=10006, max=96836, avg=29486.73, stdev=12984.61 00:20:58.624 clat percentiles (usec): 00:20:58.624 | 1.00th=[14222], 5.00th=[15664], 10.00th=[15926], 20.00th=[18482], 00:20:58.624 | 30.00th=[22938], 40.00th=[23987], 50.00th=[23987], 60.00th=[25822], 00:20:58.624 | 70.00th=[32113], 80.00th=[40633], 90.00th=[49021], 95.00th=[54789], 00:20:58.624 | 99.00th=[65274], 99.50th=[83362], 99.90th=[84411], 99.95th=[84411], 00:20:58.624 | 99.99th=[96994] 00:20:58.624 bw ( KiB/s): min= 1029, max= 3128, per=3.76%, avg=2121.11, stdev=638.22, samples=19 00:20:58.624 iops : min= 257, max= 782, avg=530.26, stdev=159.58, samples=19 00:20:58.624 lat (msec) : 10=0.02%, 20=23.46%, 50=67.96%, 100=8.56% 00:20:58.624 cpu : 
usr=46.60%, sys=2.38%, ctx=1320, majf=0, minf=9 00:20:58.624 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=82.9%, 16=16.8%, 32=0.0%, >=64=0.0% 00:20:58.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.624 complete : 0=0.0%, 4=87.6%, 8=12.3%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.624 issued rwts: total=5418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.624 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:58.624 filename0: (groupid=0, jobs=1): err= 0: pid=82692: Wed Jul 24 19:59:25 2024 00:20:58.624 read: IOPS=536, BW=2148KiB/s (2199kB/s)(21.0MiB/10012msec) 00:20:58.624 slat (usec): min=4, max=4047, avg=26.76, stdev=171.17 00:20:58.624 clat (msec): min=9, max=119, avg=29.68, stdev=14.33 00:20:58.624 lat (msec): min=9, max=119, avg=29.70, stdev=14.33 00:20:58.624 clat percentiles (msec): 00:20:58.624 | 1.00th=[ 15], 5.00th=[ 16], 10.00th=[ 16], 20.00th=[ 18], 00:20:58.624 | 30.00th=[ 22], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 26], 00:20:58.624 | 70.00th=[ 33], 80.00th=[ 46], 90.00th=[ 51], 95.00th=[ 56], 00:20:58.624 | 99.00th=[ 69], 99.50th=[ 93], 99.90th=[ 99], 99.95th=[ 99], 00:20:58.624 | 99.99th=[ 121] 00:20:58.624 bw ( KiB/s): min= 859, max= 3200, per=3.70%, avg=2087.74, stdev=719.39, samples=19 00:20:58.624 iops : min= 214, max= 800, avg=521.89, stdev=179.92, samples=19 00:20:58.624 lat (msec) : 10=0.06%, 20=26.79%, 50=62.61%, 100=10.51%, 250=0.04% 00:20:58.624 cpu : usr=50.76%, sys=2.12%, ctx=1221, majf=0, minf=9 00:20:58.624 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.1%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:58.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.624 complete : 0=0.0%, 4=88.0%, 8=11.5%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.624 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.624 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:58.625 filename0: (groupid=0, jobs=1): err= 0: pid=82693: Wed Jul 24 19:59:25 2024 00:20:58.625 read: IOPS=552, BW=2211KiB/s (2264kB/s)(21.6MiB/10001msec) 00:20:58.625 slat (usec): min=3, max=4041, avg=32.29, stdev=204.92 00:20:58.625 clat (usec): min=1025, max=151741, avg=28790.53, stdev=14591.60 00:20:58.625 lat (usec): min=1033, max=151757, avg=28822.82, stdev=14590.23 00:20:58.625 clat percentiles (msec): 00:20:58.625 | 1.00th=[ 3], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 17], 00:20:58.625 | 30.00th=[ 22], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 26], 00:20:58.625 | 70.00th=[ 32], 80.00th=[ 41], 90.00th=[ 51], 95.00th=[ 55], 00:20:58.625 | 99.00th=[ 69], 99.50th=[ 81], 99.90th=[ 129], 99.95th=[ 129], 00:20:58.625 | 99.99th=[ 153] 00:20:58.625 bw ( KiB/s): min= 908, max= 3192, per=3.80%, avg=2140.53, stdev=701.46, samples=19 00:20:58.625 iops : min= 227, max= 798, avg=535.05, stdev=175.42, samples=19 00:20:58.625 lat (msec) : 2=0.83%, 4=0.72%, 10=0.69%, 20=25.34%, 50=62.63% 00:20:58.625 lat (msec) : 100=9.50%, 250=0.29% 00:20:58.625 cpu : usr=48.07%, sys=1.79%, ctx=1427, majf=0, minf=9 00:20:58.625 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 8=79.8%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:58.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.625 complete : 0=0.0%, 4=88.2%, 8=11.1%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.625 issued rwts: total=5529,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.625 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:58.625 filename0: (groupid=0, jobs=1): err= 0: pid=82694: Wed Jul 24 19:59:25 2024 00:20:58.625 read: IOPS=718, BW=2874KiB/s 
(2943kB/s)(28.3MiB/10071msec) 00:20:58.625 slat (usec): min=7, max=8023, avg=23.32, stdev=228.05 00:20:58.625 clat (usec): min=2567, max=88886, avg=22034.18, stdev=8666.35 00:20:58.625 lat (usec): min=2575, max=88899, avg=22057.51, stdev=8666.41 00:20:58.625 clat percentiles (usec): 00:20:58.625 | 1.00th=[ 5800], 5.00th=[ 7963], 10.00th=[11994], 20.00th=[16057], 00:20:58.625 | 30.00th=[18220], 40.00th=[21103], 50.00th=[22152], 60.00th=[23462], 00:20:58.625 | 70.00th=[23987], 80.00th=[25297], 90.00th=[31851], 95.00th=[35914], 00:20:58.625 | 99.00th=[56886], 99.50th=[62653], 99.90th=[83362], 99.95th=[83362], 00:20:58.625 | 99.99th=[88605] 00:20:58.625 bw ( KiB/s): min= 1408, max= 5392, per=5.12%, avg=2884.45, stdev=740.91, samples=20 00:20:58.625 iops : min= 352, max= 1348, avg=721.05, stdev=185.20, samples=20 00:20:58.625 lat (msec) : 4=0.43%, 10=7.74%, 20=27.33%, 50=63.28%, 100=1.23% 00:20:58.625 cpu : usr=41.92%, sys=2.38%, ctx=1304, majf=0, minf=9 00:20:58.625 IO depths : 1=1.5%, 2=6.2%, 4=19.4%, 8=61.0%, 16=12.0%, 32=0.0%, >=64=0.0% 00:20:58.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.625 complete : 0=0.0%, 4=92.9%, 8=2.5%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.625 issued rwts: total=7235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.625 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:58.625 filename0: (groupid=0, jobs=1): err= 0: pid=82695: Wed Jul 24 19:59:25 2024 00:20:58.625 read: IOPS=545, BW=2182KiB/s (2234kB/s)(21.3MiB/10007msec) 00:20:58.625 slat (usec): min=5, max=4046, avg=33.59, stdev=230.29 00:20:58.625 clat (msec): min=6, max=131, avg=29.20, stdev=13.87 00:20:58.625 lat (msec): min=6, max=131, avg=29.23, stdev=13.87 00:20:58.625 clat percentiles (msec): 00:20:58.625 | 1.00th=[ 14], 5.00th=[ 16], 10.00th=[ 16], 20.00th=[ 17], 00:20:58.625 | 30.00th=[ 22], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 26], 00:20:58.625 | 70.00th=[ 33], 80.00th=[ 42], 90.00th=[ 50], 95.00th=[ 55], 00:20:58.625 | 99.00th=[ 70], 99.50th=[ 82], 99.90th=[ 108], 99.95th=[ 108], 00:20:58.625 | 99.99th=[ 132] 00:20:58.625 bw ( KiB/s): min= 881, max= 3192, per=3.76%, avg=2120.47, stdev=708.11, samples=19 00:20:58.625 iops : min= 220, max= 798, avg=530.11, stdev=177.05, samples=19 00:20:58.625 lat (msec) : 10=0.37%, 20=27.18%, 50=62.96%, 100=9.20%, 250=0.29% 00:20:58.625 cpu : usr=52.07%, sys=2.14%, ctx=1202, majf=0, minf=9 00:20:58.625 IO depths : 1=0.1%, 2=0.8%, 4=3.4%, 8=79.9%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:58.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.625 complete : 0=0.0%, 4=88.1%, 8=11.2%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.625 issued rwts: total=5459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.625 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:58.625 filename0: (groupid=0, jobs=1): err= 0: pid=82696: Wed Jul 24 19:59:25 2024 00:20:58.625 read: IOPS=707, BW=2828KiB/s (2896kB/s)(27.7MiB/10040msec) 00:20:58.625 slat (usec): min=4, max=8027, avg=19.94, stdev=215.73 00:20:58.625 clat (usec): min=1192, max=81945, avg=22469.13, stdev=8604.93 00:20:58.625 lat (usec): min=1206, max=81963, avg=22489.07, stdev=8607.14 00:20:58.625 clat percentiles (usec): 00:20:58.625 | 1.00th=[ 6587], 5.00th=[10683], 10.00th=[13042], 20.00th=[15664], 00:20:58.625 | 30.00th=[18744], 40.00th=[21365], 50.00th=[22938], 60.00th=[23725], 00:20:58.625 | 70.00th=[23987], 80.00th=[25560], 90.00th=[31065], 95.00th=[36439], 00:20:58.625 | 99.00th=[54264], 99.50th=[61080], 99.90th=[74974], 
99.95th=[77071], 00:20:58.625 | 99.99th=[82314] 00:20:58.625 bw ( KiB/s): min= 1240, max= 3584, per=5.02%, avg=2831.60, stdev=502.36, samples=20 00:20:58.625 iops : min= 310, max= 896, avg=707.90, stdev=125.59, samples=20 00:20:58.625 lat (msec) : 2=0.28%, 4=0.55%, 10=4.00%, 20=27.61%, 50=66.09% 00:20:58.625 lat (msec) : 100=1.46% 00:20:58.625 cpu : usr=40.98%, sys=2.15%, ctx=1254, majf=0, minf=9 00:20:58.625 IO depths : 1=1.0%, 2=5.4%, 4=18.3%, 8=62.8%, 16=12.6%, 32=0.0%, >=64=0.0% 00:20:58.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.625 complete : 0=0.0%, 4=92.6%, 8=3.2%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.625 issued rwts: total=7099,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.625 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:58.625 filename0: (groupid=0, jobs=1): err= 0: pid=82697: Wed Jul 24 19:59:25 2024 00:20:58.625 read: IOPS=536, BW=2146KiB/s (2197kB/s)(21.0MiB/10015msec) 00:20:58.625 slat (usec): min=4, max=4025, avg=17.11, stdev=122.49 00:20:58.625 clat (usec): min=11394, max=98430, avg=29740.98, stdev=14169.20 00:20:58.625 lat (usec): min=11408, max=98442, avg=29758.09, stdev=14171.33 00:20:58.625 clat percentiles (usec): 00:20:58.625 | 1.00th=[13829], 5.00th=[14877], 10.00th=[15533], 20.00th=[17695], 00:20:58.625 | 30.00th=[21890], 40.00th=[23462], 50.00th=[23987], 60.00th=[25822], 00:20:58.625 | 70.00th=[32375], 80.00th=[45876], 90.00th=[51643], 95.00th=[55837], 00:20:58.625 | 99.00th=[71828], 99.50th=[81265], 99.90th=[89654], 99.95th=[95945], 00:20:58.625 | 99.99th=[98042] 00:20:58.625 bw ( KiB/s): min= 835, max= 3352, per=3.71%, avg=2088.16, stdev=714.06, samples=19 00:20:58.625 iops : min= 208, max= 838, avg=522.00, stdev=178.59, samples=19 00:20:58.625 lat (msec) : 20=25.27%, 50=63.58%, 100=11.15% 00:20:58.625 cpu : usr=56.62%, sys=2.65%, ctx=1119, majf=0, minf=9 00:20:58.625 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=80.8%, 16=16.3%, 32=0.0%, >=64=0.0% 00:20:58.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.625 complete : 0=0.0%, 4=88.0%, 8=11.5%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.625 issued rwts: total=5373,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.625 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:58.625 filename1: (groupid=0, jobs=1): err= 0: pid=82698: Wed Jul 24 19:59:25 2024 00:20:58.625 read: IOPS=546, BW=2187KiB/s (2240kB/s)(21.4MiB/10008msec) 00:20:58.625 slat (usec): min=4, max=4044, avg=28.99, stdev=187.73 00:20:58.625 clat (msec): min=10, max=128, avg=29.14, stdev=14.04 00:20:58.625 lat (msec): min=10, max=128, avg=29.17, stdev=14.05 00:20:58.625 clat percentiles (msec): 00:20:58.625 | 1.00th=[ 15], 5.00th=[ 16], 10.00th=[ 16], 20.00th=[ 17], 00:20:58.625 | 30.00th=[ 22], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 26], 00:20:58.625 | 70.00th=[ 33], 80.00th=[ 43], 90.00th=[ 51], 95.00th=[ 56], 00:20:58.625 | 99.00th=[ 66], 99.50th=[ 70], 99.90th=[ 110], 99.95th=[ 110], 00:20:58.625 | 99.99th=[ 129] 00:20:58.625 bw ( KiB/s): min= 842, max= 3240, per=3.79%, avg=2134.84, stdev=722.97, samples=19 00:20:58.625 iops : min= 210, max= 810, avg=533.68, stdev=180.79, samples=19 00:20:58.625 lat (msec) : 20=29.15%, 50=60.54%, 100=10.01%, 250=0.29% 00:20:58.625 cpu : usr=46.47%, sys=2.43%, ctx=1247, majf=0, minf=9 00:20:58.625 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.4%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:58.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.625 complete : 0=0.0%, 4=87.7%, 8=11.8%, 16=0.4%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.625 issued rwts: total=5472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.625 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:58.625 filename1: (groupid=0, jobs=1): err= 0: pid=82699: Wed Jul 24 19:59:25 2024 00:20:58.625 read: IOPS=678, BW=2716KiB/s (2781kB/s)(26.7MiB/10074msec) 00:20:58.625 slat (usec): min=5, max=8033, avg=22.94, stdev=290.53 00:20:58.625 clat (usec): min=1166, max=95964, avg=23383.55, stdev=9560.12 00:20:58.625 lat (usec): min=1175, max=95983, avg=23406.48, stdev=9563.52 00:20:58.625 clat percentiles (usec): 00:20:58.625 | 1.00th=[ 6128], 5.00th=[11863], 10.00th=[12125], 20.00th=[14746], 00:20:58.625 | 30.00th=[21890], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:20:58.625 | 70.00th=[23987], 80.00th=[25035], 90.00th=[35390], 95.00th=[35914], 00:20:58.625 | 99.00th=[58459], 99.50th=[72877], 99.90th=[95945], 99.95th=[95945], 00:20:58.625 | 99.99th=[95945] 00:20:58.625 bw ( KiB/s): min= 1280, max= 4704, per=4.84%, avg=2726.25, stdev=608.72, samples=20 00:20:58.625 iops : min= 320, max= 1176, avg=681.50, stdev=152.16, samples=20 00:20:58.625 lat (msec) : 2=0.03%, 4=0.51%, 10=1.35%, 20=21.67%, 50=75.24% 00:20:58.625 lat (msec) : 100=1.20% 00:20:58.625 cpu : usr=31.38%, sys=1.90%, ctx=863, majf=0, minf=9 00:20:58.625 IO depths : 1=1.9%, 2=6.4%, 4=19.6%, 8=60.3%, 16=11.8%, 32=0.0%, >=64=0.0% 00:20:58.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.625 complete : 0=0.0%, 4=93.1%, 8=2.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.625 issued rwts: total=6839,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.625 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:58.625 filename1: (groupid=0, jobs=1): err= 0: pid=82700: Wed Jul 24 19:59:25 2024 00:20:58.625 read: IOPS=696, BW=2787KiB/s (2853kB/s)(27.3MiB/10034msec) 00:20:58.625 slat (usec): min=4, max=8032, avg=23.26, stdev=245.50 00:20:58.625 clat (usec): min=882, max=87870, avg=22776.18, stdev=8609.19 00:20:58.625 lat (usec): min=890, max=87892, avg=22799.44, stdev=8615.06 00:20:58.625 clat percentiles (usec): 00:20:58.625 | 1.00th=[ 5014], 5.00th=[11863], 10.00th=[14091], 20.00th=[16057], 00:20:58.626 | 30.00th=[18744], 40.00th=[21627], 50.00th=[23200], 60.00th=[23987], 00:20:58.626 | 70.00th=[24249], 80.00th=[26608], 90.00th=[31589], 95.00th=[35390], 00:20:58.626 | 99.00th=[57934], 99.50th=[71828], 99.90th=[80217], 99.95th=[80217], 00:20:58.626 | 99.99th=[87557] 00:20:58.626 bw ( KiB/s): min= 1264, max= 3936, per=4.95%, avg=2789.05, stdev=537.73, samples=20 00:20:58.626 iops : min= 316, max= 984, avg=697.25, stdev=134.44, samples=20 00:20:58.626 lat (usec) : 1000=0.03% 00:20:58.626 lat (msec) : 2=0.03%, 4=0.89%, 10=1.80%, 20=31.10%, 50=64.92% 00:20:58.626 lat (msec) : 100=1.23% 00:20:58.626 cpu : usr=41.06%, sys=2.08%, ctx=1203, majf=0, minf=9 00:20:58.626 IO depths : 1=1.6%, 2=6.8%, 4=21.6%, 8=58.5%, 16=11.5%, 32=0.0%, >=64=0.0% 00:20:58.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.626 complete : 0=0.0%, 4=93.5%, 8=1.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.626 issued rwts: total=6990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.626 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:58.626 filename1: (groupid=0, jobs=1): err= 0: pid=82701: Wed Jul 24 19:59:25 2024 00:20:58.626 read: IOPS=547, BW=2192KiB/s (2244kB/s)(21.4MiB/10008msec) 00:20:58.626 slat (usec): min=5, max=5030, avg=31.89, stdev=225.22 00:20:58.626 clat (msec): min=7, max=131, avg=29.07, 
stdev=13.84 00:20:58.626 lat (msec): min=7, max=131, avg=29.10, stdev=13.85 00:20:58.626 clat percentiles (msec): 00:20:58.626 | 1.00th=[ 14], 5.00th=[ 16], 10.00th=[ 16], 20.00th=[ 17], 00:20:58.626 | 30.00th=[ 22], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 26], 00:20:58.626 | 70.00th=[ 32], 80.00th=[ 40], 90.00th=[ 50], 95.00th=[ 56], 00:20:58.626 | 99.00th=[ 70], 99.50th=[ 82], 99.90th=[ 108], 99.95th=[ 108], 00:20:58.626 | 99.99th=[ 132] 00:20:58.626 bw ( KiB/s): min= 880, max= 3192, per=3.78%, avg=2128.42, stdev=695.75, samples=19 00:20:58.626 iops : min= 220, max= 798, avg=532.11, stdev=173.94, samples=19 00:20:58.626 lat (msec) : 10=0.16%, 20=26.90%, 50=63.13%, 100=9.52%, 250=0.29% 00:20:58.626 cpu : usr=46.09%, sys=2.01%, ctx=1565, majf=0, minf=9 00:20:58.626 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.3%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:58.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.626 complete : 0=0.0%, 4=87.5%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.626 issued rwts: total=5484,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.626 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:58.626 filename1: (groupid=0, jobs=1): err= 0: pid=82702: Wed Jul 24 19:59:25 2024 00:20:58.626 read: IOPS=544, BW=2180KiB/s (2232kB/s)(21.3MiB/10008msec) 00:20:58.626 slat (usec): min=5, max=8035, avg=40.37, stdev=320.78 00:20:58.626 clat (msec): min=7, max=119, avg=29.17, stdev=14.14 00:20:58.626 lat (msec): min=7, max=119, avg=29.21, stdev=14.15 00:20:58.626 clat percentiles (msec): 00:20:58.626 | 1.00th=[ 14], 5.00th=[ 16], 10.00th=[ 16], 20.00th=[ 17], 00:20:58.626 | 30.00th=[ 21], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 26], 00:20:58.626 | 70.00th=[ 32], 80.00th=[ 41], 90.00th=[ 52], 95.00th=[ 56], 00:20:58.626 | 99.00th=[ 71], 99.50th=[ 86], 99.90th=[ 96], 99.95th=[ 96], 00:20:58.626 | 99.99th=[ 121] 00:20:58.626 bw ( KiB/s): min= 866, max= 3200, per=3.76%, avg=2121.37, stdev=724.01, samples=19 00:20:58.626 iops : min= 216, max= 800, avg=530.32, stdev=181.05, samples=19 00:20:58.626 lat (msec) : 10=0.17%, 20=29.12%, 50=59.85%, 100=10.84%, 250=0.04% 00:20:58.626 cpu : usr=45.97%, sys=2.07%, ctx=1598, majf=0, minf=9 00:20:58.626 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=81.1%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:58.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.626 complete : 0=0.0%, 4=87.8%, 8=11.7%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.626 issued rwts: total=5454,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.626 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:58.626 filename1: (groupid=0, jobs=1): err= 0: pid=82703: Wed Jul 24 19:59:25 2024 00:20:58.626 read: IOPS=780, BW=3121KiB/s (3196kB/s)(30.7MiB/10059msec) 00:20:58.626 slat (usec): min=5, max=8031, avg=21.22, stdev=225.11 00:20:58.626 clat (usec): min=762, max=86086, avg=20326.43, stdev=10297.93 00:20:58.626 lat (usec): min=771, max=86095, avg=20347.65, stdev=10298.27 00:20:58.626 clat percentiles (usec): 00:20:58.626 | 1.00th=[ 1795], 5.00th=[ 2933], 10.00th=[ 5276], 20.00th=[12256], 00:20:58.626 | 30.00th=[15926], 40.00th=[19530], 50.00th=[22152], 60.00th=[23725], 00:20:58.626 | 70.00th=[23987], 80.00th=[25035], 90.00th=[33424], 95.00th=[35914], 00:20:58.626 | 99.00th=[47973], 99.50th=[60031], 99.90th=[73925], 99.95th=[73925], 00:20:58.626 | 99.99th=[86508] 00:20:58.626 bw ( KiB/s): min= 1640, max=10496, per=5.56%, avg=3132.80, stdev=1804.31, samples=20 00:20:58.626 iops : min= 410, max= 2624, avg=783.20, stdev=451.08, 
samples=20 00:20:58.626 lat (usec) : 1000=0.15% 00:20:58.626 lat (msec) : 2=2.64%, 4=4.61%, 10=9.75%, 20=23.73%, 50=58.26% 00:20:58.626 lat (msec) : 100=0.87% 00:20:58.626 cpu : usr=37.98%, sys=2.07%, ctx=1079, majf=0, minf=9 00:20:58.626 IO depths : 1=1.0%, 2=6.0%, 4=20.8%, 8=59.9%, 16=12.3%, 32=0.0%, >=64=0.0% 00:20:58.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.626 complete : 0=0.0%, 4=93.4%, 8=1.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.626 issued rwts: total=7848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.626 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:58.626 filename1: (groupid=0, jobs=1): err= 0: pid=82704: Wed Jul 24 19:59:25 2024 00:20:58.626 read: IOPS=524, BW=2097KiB/s (2148kB/s)(20.5MiB/10020msec) 00:20:58.626 slat (usec): min=4, max=7037, avg=47.20, stdev=313.39 00:20:58.626 clat (usec): min=13855, max=91820, avg=30298.37, stdev=13635.30 00:20:58.626 lat (usec): min=13879, max=91835, avg=30345.57, stdev=13639.20 00:20:58.626 clat percentiles (usec): 00:20:58.626 | 1.00th=[14615], 5.00th=[15533], 10.00th=[15926], 20.00th=[19006], 00:20:58.626 | 30.00th=[22676], 40.00th=[23725], 50.00th=[24249], 60.00th=[28443], 00:20:58.626 | 70.00th=[32900], 80.00th=[45351], 90.00th=[51643], 95.00th=[55837], 00:20:58.626 | 99.00th=[70779], 99.50th=[74974], 99.90th=[77071], 99.95th=[86508], 00:20:58.626 | 99.99th=[91751] 00:20:58.626 bw ( KiB/s): min= 904, max= 3136, per=3.65%, avg=2056.84, stdev=675.94, samples=19 00:20:58.626 iops : min= 226, max= 784, avg=514.21, stdev=168.98, samples=19 00:20:58.626 lat (msec) : 20=23.62%, 50=64.31%, 100=12.07% 00:20:58.626 cpu : usr=47.10%, sys=2.15%, ctx=1301, majf=0, minf=9 00:20:58.626 IO depths : 1=0.1%, 2=0.7%, 4=2.7%, 8=80.1%, 16=16.5%, 32=0.0%, >=64=0.0% 00:20:58.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.626 complete : 0=0.0%, 4=88.4%, 8=11.1%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.626 issued rwts: total=5254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.626 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:58.626 filename1: (groupid=0, jobs=1): err= 0: pid=82705: Wed Jul 24 19:59:25 2024 00:20:58.626 read: IOPS=554, BW=2217KiB/s (2270kB/s)(21.7MiB/10004msec) 00:20:58.626 slat (usec): min=4, max=9035, avg=34.15, stdev=256.58 00:20:58.626 clat (msec): min=2, max=156, avg=28.72, stdev=14.28 00:20:58.626 lat (msec): min=2, max=156, avg=28.75, stdev=14.28 00:20:58.626 clat percentiles (msec): 00:20:58.626 | 1.00th=[ 9], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 17], 00:20:58.626 | 30.00th=[ 21], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 27], 00:20:58.626 | 70.00th=[ 32], 80.00th=[ 41], 90.00th=[ 50], 95.00th=[ 55], 00:20:58.626 | 99.00th=[ 65], 99.50th=[ 67], 99.90th=[ 133], 99.95th=[ 133], 00:20:58.626 | 99.99th=[ 157] 00:20:58.626 bw ( KiB/s): min= 897, max= 3272, per=3.84%, avg=2166.00, stdev=711.08, samples=19 00:20:58.626 iops : min= 224, max= 818, avg=541.42, stdev=177.83, samples=19 00:20:58.626 lat (msec) : 4=0.40%, 10=0.76%, 20=28.61%, 50=61.62%, 100=8.33% 00:20:58.626 lat (msec) : 250=0.29% 00:20:58.626 cpu : usr=50.66%, sys=2.20%, ctx=1498, majf=0, minf=9 00:20:58.626 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 8=80.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:58.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.626 complete : 0=0.0%, 4=88.0%, 8=11.3%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.626 issued rwts: total=5544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.626 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:20:58.626 filename2: (groupid=0, jobs=1): err= 0: pid=82706: Wed Jul 24 19:59:25 2024 00:20:58.626 read: IOPS=553, BW=2215KiB/s (2268kB/s)(21.6MiB/10007msec) 00:20:58.626 slat (usec): min=3, max=5040, avg=25.15, stdev=151.93 00:20:58.626 clat (msec): min=6, max=128, avg=28.78, stdev=13.53 00:20:58.626 lat (msec): min=6, max=128, avg=28.80, stdev=13.53 00:20:58.626 clat percentiles (msec): 00:20:58.626 | 1.00th=[ 14], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 17], 00:20:58.626 | 30.00th=[ 22], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 28], 00:20:58.626 | 70.00th=[ 32], 80.00th=[ 40], 90.00th=[ 51], 95.00th=[ 54], 00:20:58.626 | 99.00th=[ 67], 99.50th=[ 81], 99.90th=[ 105], 99.95th=[ 105], 00:20:58.626 | 99.99th=[ 129] 00:20:58.626 bw ( KiB/s): min= 897, max= 3376, per=3.84%, avg=2163.42, stdev=712.68, samples=19 00:20:58.626 iops : min= 224, max= 844, avg=540.84, stdev=178.20, samples=19 00:20:58.626 lat (msec) : 10=0.36%, 20=23.93%, 50=65.81%, 100=9.62%, 250=0.29% 00:20:58.626 cpu : usr=50.21%, sys=1.79%, ctx=1580, majf=0, minf=9 00:20:58.626 IO depths : 1=0.1%, 2=0.1%, 4=0.6%, 8=82.9%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:58.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.626 complete : 0=0.0%, 4=87.4%, 8=12.5%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.626 issued rwts: total=5542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.626 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:58.626 filename2: (groupid=0, jobs=1): err= 0: pid=82707: Wed Jul 24 19:59:25 2024 00:20:58.626 read: IOPS=507, BW=2032KiB/s (2081kB/s)(19.9MiB/10017msec) 00:20:58.626 slat (usec): min=4, max=8059, avg=25.35, stdev=235.80 00:20:58.626 clat (usec): min=4014, max=87988, avg=31342.38, stdev=12870.93 00:20:58.626 lat (usec): min=4024, max=88005, avg=31367.73, stdev=12874.93 00:20:58.626 clat percentiles (usec): 00:20:58.626 | 1.00th=[ 8455], 5.00th=[15795], 10.00th=[16319], 20.00th=[22152], 00:20:58.626 | 30.00th=[23987], 40.00th=[24249], 50.00th=[26870], 60.00th=[31589], 00:20:58.626 | 70.00th=[35914], 80.00th=[45876], 90.00th=[51119], 95.00th=[55313], 00:20:58.626 | 99.00th=[64226], 99.50th=[66847], 99.90th=[78119], 99.95th=[78119], 00:20:58.627 | 99.99th=[87557] 00:20:58.627 bw ( KiB/s): min= 1282, max= 3008, per=3.59%, avg=2021.32, stdev=601.87, samples=19 00:20:58.627 iops : min= 320, max= 752, avg=505.21, stdev=150.55, samples=19 00:20:58.627 lat (msec) : 10=1.08%, 20=15.04%, 50=72.70%, 100=11.18% 00:20:58.627 cpu : usr=52.12%, sys=2.31%, ctx=1151, majf=0, minf=9 00:20:58.627 IO depths : 1=0.1%, 2=1.3%, 4=5.3%, 8=77.3%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:58.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.627 complete : 0=0.0%, 4=89.0%, 8=9.8%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.627 issued rwts: total=5088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:58.627 filename2: (groupid=0, jobs=1): err= 0: pid=82708: Wed Jul 24 19:59:25 2024 00:20:58.627 read: IOPS=528, BW=2115KiB/s (2166kB/s)(20.7MiB/10012msec) 00:20:58.627 slat (usec): min=4, max=5030, avg=25.30, stdev=199.64 00:20:58.627 clat (msec): min=9, max=121, avg=30.15, stdev=14.23 00:20:58.627 lat (msec): min=9, max=121, avg=30.17, stdev=14.23 00:20:58.627 clat percentiles (msec): 00:20:58.627 | 1.00th=[ 15], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 21], 00:20:58.627 | 30.00th=[ 23], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 28], 00:20:58.627 | 70.00th=[ 32], 
80.00th=[ 44], 90.00th=[ 53], 95.00th=[ 55], 00:20:58.627 | 99.00th=[ 79], 99.50th=[ 96], 99.90th=[ 109], 99.95th=[ 109], 00:20:58.627 | 99.99th=[ 122] 00:20:58.627 bw ( KiB/s): min= 731, max= 3032, per=3.66%, avg=2060.79, stdev=659.64, samples=19 00:20:58.627 iops : min= 182, max= 758, avg=515.16, stdev=164.99, samples=19 00:20:58.627 lat (msec) : 10=0.06%, 20=19.43%, 50=68.67%, 100=11.54%, 250=0.30% 00:20:58.627 cpu : usr=50.72%, sys=2.19%, ctx=1353, majf=0, minf=9 00:20:58.627 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=81.7%, 16=16.8%, 32=0.0%, >=64=0.0% 00:20:58.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.627 complete : 0=0.0%, 4=88.0%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.627 issued rwts: total=5295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:58.627 filename2: (groupid=0, jobs=1): err= 0: pid=82709: Wed Jul 24 19:59:25 2024 00:20:58.627 read: IOPS=533, BW=2136KiB/s (2187kB/s)(20.9MiB/10015msec) 00:20:58.627 slat (usec): min=4, max=8062, avg=30.30, stdev=224.54 00:20:58.627 clat (msec): min=10, max=111, avg=29.82, stdev=13.98 00:20:58.627 lat (msec): min=10, max=111, avg=29.85, stdev=13.98 00:20:58.627 clat percentiles (msec): 00:20:58.627 | 1.00th=[ 14], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 18], 00:20:58.627 | 30.00th=[ 23], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 27], 00:20:58.627 | 70.00th=[ 33], 80.00th=[ 47], 90.00th=[ 52], 95.00th=[ 56], 00:20:58.627 | 99.00th=[ 70], 99.50th=[ 75], 99.90th=[ 90], 99.95th=[ 94], 00:20:58.627 | 99.99th=[ 112] 00:20:58.627 bw ( KiB/s): min= 960, max= 3384, per=3.70%, avg=2086.74, stdev=724.18, samples=19 00:20:58.627 iops : min= 240, max= 846, avg=521.68, stdev=181.04, samples=19 00:20:58.627 lat (msec) : 20=24.72%, 50=64.28%, 100=10.96%, 250=0.04% 00:20:58.627 cpu : usr=54.14%, sys=2.46%, ctx=1253, majf=0, minf=9 00:20:58.627 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=80.0%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:58.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.627 complete : 0=0.0%, 4=88.2%, 8=11.1%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.627 issued rwts: total=5347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:58.627 filename2: (groupid=0, jobs=1): err= 0: pid=82710: Wed Jul 24 19:59:25 2024 00:20:58.627 read: IOPS=692, BW=2770KiB/s (2836kB/s)(27.1MiB/10037msec) 00:20:58.627 slat (usec): min=3, max=8034, avg=26.36, stdev=273.67 00:20:58.627 clat (usec): min=1565, max=81855, avg=22889.88, stdev=8296.28 00:20:58.627 lat (usec): min=1576, max=81875, avg=22916.24, stdev=8293.69 00:20:58.627 clat percentiles (usec): 00:20:58.627 | 1.00th=[ 5145], 5.00th=[11731], 10.00th=[12387], 20.00th=[16188], 00:20:58.627 | 30.00th=[20055], 40.00th=[21890], 50.00th=[22938], 60.00th=[23725], 00:20:58.627 | 70.00th=[24249], 80.00th=[27395], 90.00th=[32113], 95.00th=[35914], 00:20:58.627 | 99.00th=[55837], 99.50th=[60556], 99.90th=[71828], 99.95th=[81265], 00:20:58.627 | 99.99th=[82314] 00:20:58.627 bw ( KiB/s): min= 1664, max= 4573, per=4.92%, avg=2775.70, stdev=541.01, samples=20 00:20:58.627 iops : min= 416, max= 1143, avg=693.90, stdev=135.21, samples=20 00:20:58.627 lat (msec) : 2=0.03%, 4=0.73%, 10=1.88%, 20=27.01%, 50=69.29% 00:20:58.627 lat (msec) : 100=1.05% 00:20:58.627 cpu : usr=39.26%, sys=1.79%, ctx=1224, majf=0, minf=10 00:20:58.627 IO depths : 1=1.4%, 2=5.9%, 4=19.0%, 8=61.4%, 16=12.2%, 32=0.0%, >=64=0.0% 00:20:58.627 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.627 complete : 0=0.0%, 4=92.9%, 8=2.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.627 issued rwts: total=6950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:58.627 filename2: (groupid=0, jobs=1): err= 0: pid=82711: Wed Jul 24 19:59:25 2024 00:20:58.627 read: IOPS=675, BW=2702KiB/s (2767kB/s)(26.6MiB/10062msec) 00:20:58.627 slat (usec): min=3, max=16024, avg=19.11, stdev=253.96 00:20:58.627 clat (usec): min=1371, max=92773, avg=23513.75, stdev=8866.12 00:20:58.627 lat (usec): min=1380, max=92788, avg=23532.86, stdev=8870.59 00:20:58.627 clat percentiles (usec): 00:20:58.627 | 1.00th=[ 6259], 5.00th=[11863], 10.00th=[14746], 20.00th=[17695], 00:20:58.627 | 30.00th=[20579], 40.00th=[22414], 50.00th=[23462], 60.00th=[23987], 00:20:58.627 | 70.00th=[24249], 80.00th=[26346], 90.00th=[34866], 95.00th=[35914], 00:20:58.627 | 99.00th=[55313], 99.50th=[71828], 99.90th=[83362], 99.95th=[84411], 00:20:58.627 | 99.99th=[92799] 00:20:58.627 bw ( KiB/s): min= 1384, max= 3736, per=4.81%, avg=2712.40, stdev=486.91, samples=20 00:20:58.627 iops : min= 346, max= 934, avg=678.10, stdev=121.73, samples=20 00:20:58.627 lat (msec) : 2=0.03%, 4=0.66%, 10=2.90%, 20=25.08%, 50=70.25% 00:20:58.627 lat (msec) : 100=1.07% 00:20:58.627 cpu : usr=35.59%, sys=2.11%, ctx=1226, majf=0, minf=9 00:20:58.627 IO depths : 1=1.8%, 2=6.7%, 4=20.4%, 8=59.6%, 16=11.5%, 32=0.0%, >=64=0.0% 00:20:58.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.627 complete : 0=0.0%, 4=93.2%, 8=1.9%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.627 issued rwts: total=6797,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:58.627 filename2: (groupid=0, jobs=1): err= 0: pid=82712: Wed Jul 24 19:59:25 2024 00:20:58.627 read: IOPS=554, BW=2217KiB/s (2270kB/s)(21.7MiB/10004msec) 00:20:58.627 slat (usec): min=4, max=4084, avg=30.64, stdev=201.93 00:20:58.627 clat (msec): min=2, max=143, avg=28.72, stdev=14.18 00:20:58.627 lat (msec): min=2, max=143, avg=28.75, stdev=14.17 00:20:58.627 clat percentiles (msec): 00:20:58.627 | 1.00th=[ 11], 5.00th=[ 16], 10.00th=[ 16], 20.00th=[ 17], 00:20:58.627 | 30.00th=[ 20], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 26], 00:20:58.627 | 70.00th=[ 32], 80.00th=[ 41], 90.00th=[ 50], 95.00th=[ 56], 00:20:58.627 | 99.00th=[ 68], 99.50th=[ 78], 99.90th=[ 122], 99.95th=[ 122], 00:20:58.627 | 99.99th=[ 144] 00:20:58.627 bw ( KiB/s): min= 768, max= 3232, per=3.80%, avg=2144.00, stdev=731.20, samples=19 00:20:58.627 iops : min= 192, max= 808, avg=536.00, stdev=182.80, samples=19 00:20:58.627 lat (msec) : 4=0.05%, 10=0.87%, 20=29.52%, 50=60.38%, 100=8.89% 00:20:58.627 lat (msec) : 250=0.29% 00:20:58.627 cpu : usr=47.35%, sys=1.87%, ctx=1393, majf=0, minf=9 00:20:58.627 IO depths : 1=0.1%, 2=0.8%, 4=3.2%, 8=80.3%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:58.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.627 complete : 0=0.0%, 4=87.9%, 8=11.4%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.627 issued rwts: total=5545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:58.627 filename2: (groupid=0, jobs=1): err= 0: pid=82713: Wed Jul 24 19:59:25 2024 00:20:58.627 read: IOPS=536, BW=2144KiB/s (2196kB/s)(21.0MiB/10018msec) 00:20:58.627 slat (usec): min=4, max=4050, avg=20.45, stdev=155.03 00:20:58.627 clat 
(usec): min=11433, max=90480, avg=29742.46, stdev=13736.57 00:20:58.627 lat (usec): min=11441, max=90494, avg=29762.91, stdev=13740.94 00:20:58.627 clat percentiles (usec): 00:20:58.627 | 1.00th=[14353], 5.00th=[15270], 10.00th=[15926], 20.00th=[17957], 00:20:58.627 | 30.00th=[22414], 40.00th=[23725], 50.00th=[23987], 60.00th=[25560], 00:20:58.627 | 70.00th=[32637], 80.00th=[43254], 90.00th=[50594], 95.00th=[55837], 00:20:58.627 | 99.00th=[68682], 99.50th=[79168], 99.90th=[90702], 99.95th=[90702], 00:20:58.627 | 99.99th=[90702] 00:20:58.627 bw ( KiB/s): min= 1008, max= 3248, per=3.80%, avg=2141.60, stdev=719.18, samples=20 00:20:58.627 iops : min= 252, max= 812, avg=535.40, stdev=179.79, samples=20 00:20:58.627 lat (msec) : 20=24.34%, 50=64.75%, 100=10.91% 00:20:58.627 cpu : usr=48.72%, sys=2.24%, ctx=1498, majf=0, minf=9 00:20:58.627 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=81.6%, 16=16.6%, 32=0.0%, >=64=0.0% 00:20:58.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.627 complete : 0=0.0%, 4=87.9%, 8=11.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.627 issued rwts: total=5370,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.627 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:58.627 00:20:58.627 Run status group 0 (all jobs): 00:20:58.627 READ: bw=55.0MiB/s (57.7MB/s), 2032KiB/s-3121KiB/s (2081kB/s-3196kB/s), io=554MiB (581MB), run=10001-10074msec 00:20:58.627 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:58.627 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:58.627 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:58.627 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:58.627 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:58.627 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:58.627 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.627 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.627 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.627 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:58.627 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete 
bdev_null1 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.628 bdev_null0 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:58.628 19:59:25 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.628 [2024-07-24 19:59:25.840990] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.628 bdev_null1 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params 
-- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:58.628 { 00:20:58.628 "params": { 00:20:58.628 "name": "Nvme$subsystem", 00:20:58.628 "trtype": "$TEST_TRANSPORT", 00:20:58.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.628 "adrfam": "ipv4", 00:20:58.628 "trsvcid": "$NVMF_PORT", 00:20:58.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.628 "hdgst": ${hdgst:-false}, 00:20:58.628 "ddgst": ${ddgst:-false} 00:20:58.628 }, 00:20:58.628 "method": "bdev_nvme_attach_controller" 00:20:58.628 } 00:20:58.628 EOF 00:20:58.628 )") 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:58.628 19:59:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:58.628 { 00:20:58.628 "params": { 00:20:58.628 "name": "Nvme$subsystem", 00:20:58.628 "trtype": "$TEST_TRANSPORT", 00:20:58.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.628 "adrfam": "ipv4", 00:20:58.628 "trsvcid": "$NVMF_PORT", 00:20:58.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:20:58.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.628 "hdgst": ${hdgst:-false}, 00:20:58.628 "ddgst": ${ddgst:-false} 00:20:58.629 }, 00:20:58.629 "method": "bdev_nvme_attach_controller" 00:20:58.629 } 00:20:58.629 EOF 00:20:58.629 )") 00:20:58.629 19:59:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:58.629 19:59:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:20:58.629 19:59:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:58.629 19:59:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:58.629 "params": { 00:20:58.629 "name": "Nvme0", 00:20:58.629 "trtype": "tcp", 00:20:58.629 "traddr": "10.0.0.2", 00:20:58.629 "adrfam": "ipv4", 00:20:58.629 "trsvcid": "4420", 00:20:58.629 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:58.629 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:58.629 "hdgst": false, 00:20:58.629 "ddgst": false 00:20:58.629 }, 00:20:58.629 "method": "bdev_nvme_attach_controller" 00:20:58.629 },{ 00:20:58.629 "params": { 00:20:58.629 "name": "Nvme1", 00:20:58.629 "trtype": "tcp", 00:20:58.629 "traddr": "10.0.0.2", 00:20:58.629 "adrfam": "ipv4", 00:20:58.629 "trsvcid": "4420", 00:20:58.629 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.629 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:58.629 "hdgst": false, 00:20:58.629 "ddgst": false 00:20:58.629 }, 00:20:58.629 "method": "bdev_nvme_attach_controller" 00:20:58.629 }' 00:20:58.629 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:58.629 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:58.629 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:58.629 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:58.629 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:58.629 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:58.629 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:58.629 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:58.629 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:58.629 19:59:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:58.629 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:58.629 ... 00:20:58.629 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:58.629 ... 
00:20:58.629 fio-3.35 00:20:58.629 Starting 4 threads 00:21:03.976 00:21:03.976 filename0: (groupid=0, jobs=1): err= 0: pid=82934: Wed Jul 24 19:59:31 2024 00:21:03.976 read: IOPS=1963, BW=15.3MiB/s (16.1MB/s)(76.7MiB/5003msec) 00:21:03.976 slat (nsec): min=7647, max=59330, avg=12157.53, stdev=4188.44 00:21:03.976 clat (usec): min=1644, max=10598, avg=4041.44, stdev=929.96 00:21:03.976 lat (usec): min=1658, max=10607, avg=4053.59, stdev=929.99 00:21:03.976 clat percentiles (usec): 00:21:03.976 | 1.00th=[ 3261], 5.00th=[ 3326], 10.00th=[ 3359], 20.00th=[ 3392], 00:21:03.976 | 30.00th=[ 3425], 40.00th=[ 3458], 50.00th=[ 3523], 60.00th=[ 3654], 00:21:03.976 | 70.00th=[ 4359], 80.00th=[ 5080], 90.00th=[ 5342], 95.00th=[ 5473], 00:21:03.976 | 99.00th=[ 6849], 99.50th=[ 7963], 99.90th=[ 8848], 99.95th=[ 9634], 00:21:03.976 | 99.99th=[10552] 00:21:03.976 bw ( KiB/s): min=13760, max=16272, per=24.76%, avg=15644.22, stdev=837.51, samples=9 00:21:03.976 iops : min= 1720, max= 2034, avg=1955.44, stdev=104.72, samples=9 00:21:03.976 lat (msec) : 2=0.14%, 4=67.84%, 10=32.00%, 20=0.02% 00:21:03.976 cpu : usr=91.16%, sys=7.90%, ctx=9, majf=0, minf=0 00:21:03.976 IO depths : 1=0.1%, 2=0.5%, 4=71.4%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:03.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.976 complete : 0=0.0%, 4=99.8%, 8=0.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.976 issued rwts: total=9823,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.976 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:03.976 filename0: (groupid=0, jobs=1): err= 0: pid=82935: Wed Jul 24 19:59:31 2024 00:21:03.976 read: IOPS=1977, BW=15.5MiB/s (16.2MB/s)(77.3MiB/5001msec) 00:21:03.976 slat (nsec): min=7783, max=49541, avg=14741.18, stdev=5046.09 00:21:03.976 clat (usec): min=989, max=10575, avg=4003.68, stdev=919.64 00:21:03.976 lat (usec): min=997, max=10584, avg=4018.42, stdev=917.47 00:21:03.976 clat percentiles (usec): 00:21:03.976 | 1.00th=[ 3097], 5.00th=[ 3326], 10.00th=[ 3359], 20.00th=[ 3392], 00:21:03.976 | 30.00th=[ 3425], 40.00th=[ 3458], 50.00th=[ 3490], 60.00th=[ 3621], 00:21:03.976 | 70.00th=[ 4359], 80.00th=[ 5014], 90.00th=[ 5276], 95.00th=[ 5407], 00:21:03.976 | 99.00th=[ 6849], 99.50th=[ 7963], 99.90th=[ 8848], 99.95th=[ 9765], 00:21:03.976 | 99.99th=[10552] 00:21:03.976 bw ( KiB/s): min=13824, max=16576, per=24.95%, avg=15759.78, stdev=880.99, samples=9 00:21:03.976 iops : min= 1728, max= 2072, avg=1969.89, stdev=110.18, samples=9 00:21:03.976 lat (usec) : 1000=0.02% 00:21:03.976 lat (msec) : 2=0.21%, 4=68.54%, 10=31.21%, 20=0.02% 00:21:03.976 cpu : usr=89.72%, sys=9.02%, ctx=60, majf=0, minf=0 00:21:03.976 IO depths : 1=0.1%, 2=0.2%, 4=71.6%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:03.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.976 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.976 issued rwts: total=9891,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.976 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:03.976 filename1: (groupid=0, jobs=1): err= 0: pid=82936: Wed Jul 24 19:59:31 2024 00:21:03.976 read: IOPS=1979, BW=15.5MiB/s (16.2MB/s)(77.4MiB/5003msec) 00:21:03.976 slat (nsec): min=7379, max=62333, avg=9614.13, stdev=2531.61 00:21:03.976 clat (usec): min=1588, max=10618, avg=4014.69, stdev=911.38 00:21:03.976 lat (usec): min=1603, max=10626, avg=4024.31, stdev=910.82 00:21:03.976 clat percentiles (usec): 00:21:03.976 | 1.00th=[ 3130], 5.00th=[ 3326], 10.00th=[ 
3392], 20.00th=[ 3392], 00:21:03.976 | 30.00th=[ 3425], 40.00th=[ 3458], 50.00th=[ 3490], 60.00th=[ 3621], 00:21:03.976 | 70.00th=[ 4359], 80.00th=[ 5014], 90.00th=[ 5342], 95.00th=[ 5407], 00:21:03.976 | 99.00th=[ 6783], 99.50th=[ 7963], 99.90th=[ 8848], 99.95th=[ 9634], 00:21:03.976 | 99.99th=[10683] 00:21:03.976 bw ( KiB/s): min=13760, max=16560, per=24.99%, avg=15786.67, stdev=882.61, samples=9 00:21:03.976 iops : min= 1720, max= 2070, avg=1973.33, stdev=110.33, samples=9 00:21:03.976 lat (msec) : 2=0.13%, 4=68.84%, 10=31.01%, 20=0.02% 00:21:03.976 cpu : usr=91.42%, sys=7.70%, ctx=7, majf=0, minf=9 00:21:03.976 IO depths : 1=0.1%, 2=0.3%, 4=71.6%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:03.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.976 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.976 issued rwts: total=9901,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.976 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:03.976 filename1: (groupid=0, jobs=1): err= 0: pid=82937: Wed Jul 24 19:59:31 2024 00:21:03.976 read: IOPS=1977, BW=15.4MiB/s (16.2MB/s)(77.3MiB/5002msec) 00:21:03.976 slat (usec): min=7, max=104, avg=14.47, stdev= 4.07 00:21:03.976 clat (usec): min=1617, max=10584, avg=4007.45, stdev=909.04 00:21:03.976 lat (usec): min=1631, max=10602, avg=4021.92, stdev=908.61 00:21:03.976 clat percentiles (usec): 00:21:03.976 | 1.00th=[ 3163], 5.00th=[ 3326], 10.00th=[ 3359], 20.00th=[ 3392], 00:21:03.976 | 30.00th=[ 3425], 40.00th=[ 3458], 50.00th=[ 3490], 60.00th=[ 3621], 00:21:03.976 | 70.00th=[ 4359], 80.00th=[ 5014], 90.00th=[ 5276], 95.00th=[ 5407], 00:21:03.976 | 99.00th=[ 6783], 99.50th=[ 7963], 99.90th=[ 8848], 99.95th=[ 9634], 00:21:03.976 | 99.99th=[10552] 00:21:03.976 bw ( KiB/s): min=13824, max=16560, per=24.96%, avg=15768.89, stdev=875.92, samples=9 00:21:03.976 iops : min= 1728, max= 2070, avg=1971.11, stdev=109.49, samples=9 00:21:03.976 lat (msec) : 2=0.13%, 4=68.65%, 10=31.20%, 20=0.02% 00:21:03.976 cpu : usr=91.66%, sys=7.40%, ctx=19, majf=0, minf=0 00:21:03.976 IO depths : 1=0.1%, 2=0.2%, 4=71.6%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:03.976 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.976 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:03.976 issued rwts: total=9891,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:03.976 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:03.976 00:21:03.976 Run status group 0 (all jobs): 00:21:03.976 READ: bw=61.7MiB/s (64.7MB/s), 15.3MiB/s-15.5MiB/s (16.1MB/s-16.2MB/s), io=309MiB (324MB), run=5001-5003msec 00:21:03.976 19:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:21:03.976 19:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:03.976 19:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:03.976 19:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:03.976 19:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:03.976 19:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:03.976 19:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.976 19:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:03.976 19:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:21:03.976 19:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:03.976 19:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.976 19:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:03.976 19:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.976 19:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:03.976 19:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:03.976 19:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:03.976 19:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:03.977 19:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.977 19:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:03.977 19:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.977 19:59:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:03.977 19:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.977 19:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:03.977 ************************************ 00:21:03.977 END TEST fio_dif_rand_params 00:21:03.977 ************************************ 00:21:03.977 19:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.977 00:21:03.977 real 0m33.217s 00:21:03.977 user 3m29.462s 00:21:03.977 sys 0m8.890s 00:21:03.977 19:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:03.977 19:59:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:03.977 19:59:31 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:21:03.977 19:59:31 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:03.977 19:59:31 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:03.977 19:59:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:03.977 ************************************ 00:21:03.977 START TEST fio_dif_digest 00:21:03.977 ************************************ 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:21:03.977 
19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:03.977 bdev_null0 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:03.977 [2024-07-24 19:59:32.039799] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:03.977 { 00:21:03.977 "params": { 00:21:03.977 "name": 
"Nvme$subsystem", 00:21:03.977 "trtype": "$TEST_TRANSPORT", 00:21:03.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.977 "adrfam": "ipv4", 00:21:03.977 "trsvcid": "$NVMF_PORT", 00:21:03.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.977 "hdgst": ${hdgst:-false}, 00:21:03.977 "ddgst": ${ddgst:-false} 00:21:03.977 }, 00:21:03.977 "method": "bdev_nvme_attach_controller" 00:21:03.977 } 00:21:03.977 EOF 00:21:03.977 )") 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:03.977 "params": { 00:21:03.977 "name": "Nvme0", 00:21:03.977 "trtype": "tcp", 00:21:03.977 "traddr": "10.0.0.2", 00:21:03.977 "adrfam": "ipv4", 00:21:03.977 "trsvcid": "4420", 00:21:03.977 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:03.977 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:03.977 "hdgst": true, 00:21:03.977 "ddgst": true 00:21:03.977 }, 00:21:03.977 "method": "bdev_nvme_attach_controller" 00:21:03.977 }' 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:03.977 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:03.978 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:03.978 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:03.978 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:03.978 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:03.978 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:03.978 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:03.978 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:03.978 19:59:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:03.978 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:03.978 ... 
00:21:03.978 fio-3.35 00:21:03.978 Starting 3 threads 00:21:16.205 00:21:16.205 filename0: (groupid=0, jobs=1): err= 0: pid=83043: Wed Jul 24 19:59:42 2024 00:21:16.205 read: IOPS=222, BW=27.8MiB/s (29.1MB/s)(278MiB/10002msec) 00:21:16.205 slat (nsec): min=7498, max=54868, avg=13730.28, stdev=7475.29 00:21:16.205 clat (usec): min=11547, max=15425, avg=13461.71, stdev=134.81 00:21:16.205 lat (usec): min=11556, max=15459, avg=13475.44, stdev=135.40 00:21:16.205 clat percentiles (usec): 00:21:16.205 | 1.00th=[13304], 5.00th=[13435], 10.00th=[13435], 20.00th=[13435], 00:21:16.205 | 30.00th=[13435], 40.00th=[13435], 50.00th=[13435], 60.00th=[13435], 00:21:16.205 | 70.00th=[13435], 80.00th=[13435], 90.00th=[13566], 95.00th=[13566], 00:21:16.205 | 99.00th=[13960], 99.50th=[13960], 99.90th=[15401], 99.95th=[15401], 00:21:16.205 | 99.99th=[15401] 00:21:16.205 bw ( KiB/s): min=27648, max=29184, per=33.34%, avg=28456.42, stdev=310.77, samples=19 00:21:16.205 iops : min= 216, max= 228, avg=222.32, stdev= 2.43, samples=19 00:21:16.205 lat (msec) : 20=100.00% 00:21:16.205 cpu : usr=94.71%, sys=4.70%, ctx=24, majf=0, minf=0 00:21:16.205 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:16.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.205 issued rwts: total=2223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.205 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:16.205 filename0: (groupid=0, jobs=1): err= 0: pid=83044: Wed Jul 24 19:59:42 2024 00:21:16.205 read: IOPS=222, BW=27.8MiB/s (29.1MB/s)(278MiB/10001msec) 00:21:16.205 slat (nsec): min=7450, max=38469, avg=10946.73, stdev=4041.34 00:21:16.205 clat (usec): min=9990, max=15998, avg=13468.58, stdev=181.41 00:21:16.205 lat (usec): min=9998, max=16024, avg=13479.53, stdev=181.69 00:21:16.205 clat percentiles (usec): 00:21:16.205 | 1.00th=[13304], 5.00th=[13435], 10.00th=[13435], 20.00th=[13435], 00:21:16.205 | 30.00th=[13435], 40.00th=[13435], 50.00th=[13435], 60.00th=[13435], 00:21:16.205 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13566], 95.00th=[13566], 00:21:16.205 | 99.00th=[13960], 99.50th=[13960], 99.90th=[15926], 99.95th=[15926], 00:21:16.205 | 99.99th=[16057] 00:21:16.205 bw ( KiB/s): min=27703, max=29184, per=33.35%, avg=28459.32, stdev=302.98, samples=19 00:21:16.205 iops : min= 216, max= 228, avg=222.32, stdev= 2.43, samples=19 00:21:16.205 lat (msec) : 10=0.09%, 20=99.91% 00:21:16.205 cpu : usr=94.31%, sys=5.11%, ctx=11, majf=0, minf=9 00:21:16.205 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:16.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.205 issued rwts: total=2223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.205 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:16.205 filename0: (groupid=0, jobs=1): err= 0: pid=83045: Wed Jul 24 19:59:42 2024 00:21:16.205 read: IOPS=222, BW=27.8MiB/s (29.1MB/s)(278MiB/10002msec) 00:21:16.205 slat (nsec): min=7870, max=58120, avg=14631.69, stdev=7641.52 00:21:16.205 clat (usec): min=13284, max=14105, avg=13459.85, stdev=90.07 00:21:16.205 lat (usec): min=13293, max=14130, avg=13474.48, stdev=91.22 00:21:16.205 clat percentiles (usec): 00:21:16.205 | 1.00th=[13304], 5.00th=[13304], 10.00th=[13435], 20.00th=[13435], 00:21:16.205 | 30.00th=[13435], 40.00th=[13435], 
50.00th=[13435], 60.00th=[13435], 00:21:16.205 | 70.00th=[13435], 80.00th=[13435], 90.00th=[13566], 95.00th=[13566], 00:21:16.205 | 99.00th=[13960], 99.50th=[13960], 99.90th=[14091], 99.95th=[14091], 00:21:16.205 | 99.99th=[14091] 00:21:16.205 bw ( KiB/s): min=27648, max=29184, per=33.34%, avg=28456.42, stdev=310.77, samples=19 00:21:16.205 iops : min= 216, max= 228, avg=222.32, stdev= 2.43, samples=19 00:21:16.205 lat (msec) : 20=100.00% 00:21:16.205 cpu : usr=94.24%, sys=5.20%, ctx=19, majf=0, minf=0 00:21:16.205 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:16.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:16.206 issued rwts: total=2223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:16.206 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:16.206 00:21:16.206 Run status group 0 (all jobs): 00:21:16.206 READ: bw=83.3MiB/s (87.4MB/s), 27.8MiB/s-27.8MiB/s (29.1MB/s-29.1MB/s), io=834MiB (874MB), run=10001-10002msec 00:21:16.206 19:59:43 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:16.206 19:59:43 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:21:16.206 19:59:43 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:21:16.206 19:59:43 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:16.206 19:59:43 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:21:16.206 19:59:43 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:16.206 19:59:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.206 19:59:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:16.206 19:59:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.206 19:59:43 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:16.206 19:59:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.206 19:59:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:16.206 ************************************ 00:21:16.206 END TEST fio_dif_digest 00:21:16.206 ************************************ 00:21:16.206 19:59:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.206 00:21:16.206 real 0m11.048s 00:21:16.206 user 0m28.992s 00:21:16.206 sys 0m1.812s 00:21:16.206 19:59:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:16.206 19:59:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:16.206 19:59:43 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:16.206 19:59:43 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:21:16.206 19:59:43 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:16.206 19:59:43 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:21:16.206 19:59:43 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:16.206 19:59:43 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:21:16.206 19:59:43 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:16.206 19:59:43 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:16.206 rmmod nvme_tcp 00:21:16.206 rmmod nvme_fabrics 00:21:16.206 rmmod nvme_keyring 00:21:16.206 19:59:43 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:16.206 19:59:43 nvmf_dif 
-- nvmf/common.sh@124 -- # set -e 00:21:16.206 19:59:43 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:21:16.206 19:59:43 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 82209 ']' 00:21:16.206 19:59:43 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 82209 00:21:16.206 19:59:43 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 82209 ']' 00:21:16.206 19:59:43 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 82209 00:21:16.206 19:59:43 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:21:16.206 19:59:43 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:16.206 19:59:43 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82209 00:21:16.206 killing process with pid 82209 00:21:16.206 19:59:43 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:16.206 19:59:43 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:16.206 19:59:43 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82209' 00:21:16.206 19:59:43 nvmf_dif -- common/autotest_common.sh@969 -- # kill 82209 00:21:16.206 19:59:43 nvmf_dif -- common/autotest_common.sh@974 -- # wait 82209 00:21:16.206 19:59:43 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:16.206 19:59:43 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:16.206 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:16.206 Waiting for block devices as requested 00:21:16.206 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:16.206 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:16.206 19:59:44 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:16.206 19:59:44 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:16.206 19:59:44 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:16.206 19:59:44 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:16.206 19:59:44 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.206 19:59:44 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:16.206 19:59:44 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.206 19:59:44 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:16.206 00:21:16.206 real 1m9.581s 00:21:16.206 user 5m21.476s 00:21:16.206 sys 0m19.980s 00:21:16.206 19:59:44 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:16.206 19:59:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:16.206 ************************************ 00:21:16.206 END TEST nvmf_dif 00:21:16.206 ************************************ 00:21:16.206 19:59:44 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:16.206 19:59:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:16.206 19:59:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:16.206 19:59:44 -- common/autotest_common.sh@10 -- # set +x 00:21:16.206 ************************************ 00:21:16.206 START TEST nvmf_abort_qd_sizes 00:21:16.206 ************************************ 00:21:16.206 19:59:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:16.206 * Looking for test storage... 
00:21:16.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:16.206 19:59:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:16.206 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:21:16.206 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.206 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.206 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.206 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.206 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.206 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.206 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.206 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.206 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.206 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.206 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:21:16.206 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:21:16.206 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.206 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.206 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:16.206 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.206 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:16.206 19:59:44 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.206 19:59:44 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.206 19:59:44 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.206 19:59:44 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.206 19:59:44 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:16.207 19:59:44 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:16.207 Cannot find device "nvmf_tgt_br" 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:16.207 Cannot find device "nvmf_tgt_br2" 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:16.207 Cannot find device "nvmf_tgt_br" 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:16.207 Cannot find device "nvmf_tgt_br2" 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:16.207 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:16.207 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:16.207 19:59:44 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:16.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:16.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:21:16.207 00:21:16.207 --- 10.0.0.2 ping statistics --- 00:21:16.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.207 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:16.207 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:16.207 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:21:16.207 00:21:16.207 --- 10.0.0.3 ping statistics --- 00:21:16.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.207 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:16.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:16.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:21:16.207 00:21:16.207 --- 10.0.0.1 ping statistics --- 00:21:16.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.207 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:21:16.207 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:21:16.208 19:59:44 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:16.774 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:16.774 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:16.774 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:16.774 19:59:45 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:16.774 19:59:45 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:16.774 19:59:45 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:16.774 19:59:45 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:16.774 19:59:45 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:16.774 19:59:45 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:17.077 19:59:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:17.077 19:59:45 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:17.077 19:59:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:17.077 19:59:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:17.077 19:59:45 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=83645 00:21:17.077 19:59:45 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:17.077 19:59:45 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 83645 00:21:17.077 19:59:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 83645 ']' 00:21:17.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.077 19:59:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.077 19:59:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:17.077 19:59:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.077 19:59:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:17.077 19:59:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:17.077 [2024-07-24 19:59:45.525708] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:21:17.077 [2024-07-24 19:59:45.526977] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.077 [2024-07-24 19:59:45.675284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:17.335 [2024-07-24 19:59:45.804625] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.335 [2024-07-24 19:59:45.804975] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.335 [2024-07-24 19:59:45.805252] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.335 [2024-07-24 19:59:45.805575] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.335 [2024-07-24 19:59:45.805719] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:17.335 [2024-07-24 19:59:45.806057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.335 [2024-07-24 19:59:45.806127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.335 [2024-07-24 19:59:45.806262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:17.335 [2024-07-24 19:59:45.806273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.335 [2024-07-24 19:59:45.862783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:21:17.902 19:59:46 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
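The nvme_in_userspace trace above selects NVMe controllers purely by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express). The pipeline it builds, copied from the trace, is:

    # Print the domain:bus:device.function of every PCI function whose class/subclass
    # is 0108 and whose programming interface is 02, i.e. every NVMe controller.
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

Each address that pci_can_use accepts is then appended to the bdfs array, which is how this run ends up with 0000:00:10.0 and 0000:00:11.0 and picks the first one as the spdk_target_abort device.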
00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:17.902 19:59:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:17.902 ************************************ 00:21:17.902 START TEST spdk_target_abort 00:21:17.902 ************************************ 00:21:17.902 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:21:17.902 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:17.902 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:17.902 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.902 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:18.160 spdk_targetn1 00:21:18.160 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.160 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:18.160 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.160 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:18.160 [2024-07-24 19:59:46.644342] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.160 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.160 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:18.160 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.160 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:18.160 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.160 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:18.161 [2024-07-24 19:59:46.672534] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.161 19:59:46 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:18.161 19:59:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:21.441 Initializing NVMe Controllers 00:21:21.441 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:21.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:21.441 Initialization complete. Launching workers. 
00:21:21.441 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9699, failed: 0 00:21:21.441 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1083, failed to submit 8616 00:21:21.441 success 664, unsuccess 419, failed 0 00:21:21.441 19:59:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:21.442 19:59:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:24.723 Initializing NVMe Controllers 00:21:24.723 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:24.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:24.723 Initialization complete. Launching workers. 00:21:24.723 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8588, failed: 0 00:21:24.723 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1247, failed to submit 7341 00:21:24.723 success 367, unsuccess 880, failed 0 00:21:24.723 19:59:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:24.723 19:59:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:28.006 Initializing NVMe Controllers 00:21:28.006 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:28.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:28.006 Initialization complete. Launching workers. 
00:21:28.006 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30900, failed: 0 00:21:28.006 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2311, failed to submit 28589 00:21:28.006 success 411, unsuccess 1900, failed 0 00:21:28.006 19:59:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:28.006 19:59:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.006 19:59:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:28.006 19:59:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.006 19:59:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:28.006 19:59:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.006 19:59:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:28.572 19:59:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.572 19:59:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 83645 00:21:28.572 19:59:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 83645 ']' 00:21:28.572 19:59:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 83645 00:21:28.572 19:59:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:21:28.572 19:59:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:28.572 19:59:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83645 00:21:28.572 killing process with pid 83645 00:21:28.572 19:59:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:28.572 19:59:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:28.572 19:59:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83645' 00:21:28.572 19:59:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 83645 00:21:28.572 19:59:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 83645 00:21:28.572 ************************************ 00:21:28.572 END TEST spdk_target_abort 00:21:28.572 ************************************ 00:21:28.572 00:21:28.572 real 0m10.683s 00:21:28.572 user 0m43.137s 00:21:28.572 sys 0m2.029s 00:21:28.572 19:59:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:28.572 19:59:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:28.830 19:59:57 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:28.830 19:59:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:28.830 19:59:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:28.830 19:59:57 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:28.830 ************************************ 00:21:28.830 START TEST kernel_target_abort 00:21:28.830 
************************************ 00:21:28.830 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:21:28.830 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:28.830 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:21:28.830 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:28.830 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:28.830 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:28.830 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:28.830 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:28.830 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:28.830 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:28.830 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:28.830 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:28.830 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:28.830 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:28.830 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:21:28.830 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:28.831 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:28.831 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:28.831 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:21:28.831 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:28.831 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:21:28.831 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:28.831 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:29.088 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:29.088 Waiting for block devices as requested 00:21:29.088 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:29.346 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:29.346 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:29.346 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:29.346 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:21:29.346 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:21:29.346 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:29.346 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:29.346 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:21:29.346 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:21:29.346 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:29.346 No valid GPT data, bailing 00:21:29.346 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:29.346 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:29.346 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:29.346 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:21:29.346 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:29.346 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:29.346 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:21:29.346 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:21:29.346 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:29.346 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:29.346 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:21:29.346 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:21:29.346 19:59:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:29.605 No valid GPT data, bailing 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:29.605 No valid GPT data, bailing 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:29.605 No valid GPT data, bailing 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:21:29.605 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:21:29.606 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:29.606 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de --hostid=69cdc0e8-4c23-4318-834b-1d87efff05de -a 10.0.0.1 -t tcp -s 4420 00:21:29.606 00:21:29.606 Discovery Log Number of Records 2, Generation counter 2 00:21:29.606 =====Discovery Log Entry 0====== 00:21:29.606 trtype: tcp 00:21:29.606 adrfam: ipv4 00:21:29.606 subtype: current discovery subsystem 00:21:29.606 treq: not specified, sq flow control disable supported 00:21:29.606 portid: 1 00:21:29.606 trsvcid: 4420 00:21:29.606 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:29.606 traddr: 10.0.0.1 00:21:29.606 eflags: none 00:21:29.606 sectype: none 00:21:29.606 =====Discovery Log Entry 1====== 00:21:29.606 trtype: tcp 00:21:29.606 adrfam: ipv4 00:21:29.606 subtype: nvme subsystem 00:21:29.606 treq: not specified, sq flow control disable supported 00:21:29.606 portid: 1 00:21:29.606 trsvcid: 4420 00:21:29.606 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:29.606 traddr: 10.0.0.1 00:21:29.606 eflags: none 00:21:29.606 sectype: none 00:21:29.606 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:29.606 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:29.606 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:29.606 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:29.606 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:29.606 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:29.606 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:29.606 19:59:58 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:29.606 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:29.606 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:29.606 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:29.606 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:29.606 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:29.606 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:29.606 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:29.606 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:29.606 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:29.606 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:29.606 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:29.606 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:29.606 19:59:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:32.900 Initializing NVMe Controllers 00:21:32.900 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:32.900 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:32.900 Initialization complete. Launching workers. 00:21:32.900 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33564, failed: 0 00:21:32.900 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33564, failed to submit 0 00:21:32.900 success 0, unsuccess 33564, failed 0 00:21:32.900 20:00:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:32.900 20:00:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:36.171 Initializing NVMe Controllers 00:21:36.171 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:36.171 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:36.171 Initialization complete. Launching workers. 
00:21:36.171 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 71633, failed: 0 00:21:36.171 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31276, failed to submit 40357 00:21:36.171 success 0, unsuccess 31276, failed 0 00:21:36.171 20:00:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:36.171 20:00:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:39.447 Initializing NVMe Controllers 00:21:39.447 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:39.447 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:39.447 Initialization complete. Launching workers. 00:21:39.447 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 84653, failed: 0 00:21:39.447 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21172, failed to submit 63481 00:21:39.447 success 0, unsuccess 21172, failed 0 00:21:39.447 20:00:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:39.447 20:00:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:39.447 20:00:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:21:39.447 20:00:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:39.447 20:00:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:39.447 20:00:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:39.447 20:00:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:39.447 20:00:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:21:39.447 20:00:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:21:39.447 20:00:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:40.014 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:41.918 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:41.918 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:42.174 00:21:42.174 real 0m13.325s 00:21:42.174 user 0m6.217s 00:21:42.174 sys 0m4.472s 00:21:42.174 20:00:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:42.174 ************************************ 00:21:42.174 END TEST kernel_target_abort 00:21:42.174 20:00:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:42.175 ************************************ 00:21:42.175 20:00:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:42.175 20:00:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:42.175 
20:00:10 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:42.175 20:00:10 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:21:42.175 20:00:10 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:42.175 20:00:10 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:21:42.175 20:00:10 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:42.175 20:00:10 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:42.175 rmmod nvme_tcp 00:21:42.175 rmmod nvme_fabrics 00:21:42.175 rmmod nvme_keyring 00:21:42.175 20:00:10 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:42.175 20:00:10 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:21:42.175 20:00:10 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:21:42.175 20:00:10 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 83645 ']' 00:21:42.175 20:00:10 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 83645 00:21:42.175 20:00:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 83645 ']' 00:21:42.175 20:00:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 83645 00:21:42.175 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (83645) - No such process 00:21:42.175 Process with pid 83645 is not found 00:21:42.175 20:00:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 83645 is not found' 00:21:42.175 20:00:10 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:42.175 20:00:10 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:42.431 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:42.689 Waiting for block devices as requested 00:21:42.689 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:42.689 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:42.689 20:00:11 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:42.689 20:00:11 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:42.689 20:00:11 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:42.689 20:00:11 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:42.689 20:00:11 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.689 20:00:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:42.689 20:00:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.689 20:00:11 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:42.689 00:21:42.689 real 0m27.188s 00:21:42.689 user 0m50.477s 00:21:42.689 sys 0m7.840s 00:21:42.689 20:00:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:42.689 ************************************ 00:21:42.689 END TEST nvmf_abort_qd_sizes 00:21:42.689 ************************************ 00:21:42.689 20:00:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:43.021 20:00:11 -- spdk/autotest.sh@299 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:43.021 20:00:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:43.021 20:00:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:43.021 20:00:11 -- common/autotest_common.sh@10 -- # set +x 
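For reference, the configure_kernel_target and clean_kernel_target traces in the kernel_target_abort test above follow the stock Linux nvmet configfs flow. The redirection targets of the traced echo commands are not captured by xtrace, so the attribute names below are the standard nvmet ones and should be read as an illustrative reconstruction; the device and address come from this run:

    modprobe nvmet      # as in the trace; nvmet_tcp is loaded by the time the tcp port is enabled
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub" "$sub/namespaces/1" "$port"
    # The traced 'echo SPDK-nqn...' writes a subsystem identity attribute whose path
    # is not visible in the trace, so it is omitted here.
    echo 1 > "$sub/attr_allow_any_host"
    echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
    echo 1 > "$sub/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"
    # Teardown, mirroring clean_kernel_target: disable the namespace, unlink, remove.
    echo 0 > "$sub/namespaces/1/enable"
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$sub/namespaces/1" "$port" "$sub"
    modprobe -r nvmet_tcp nvmet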
00:21:43.021 ************************************ 00:21:43.021 START TEST keyring_file 00:21:43.021 ************************************ 00:21:43.021 20:00:11 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:43.021 * Looking for test storage... 00:21:43.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:43.021 20:00:11 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:43.021 20:00:11 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:43.021 20:00:11 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:43.021 20:00:11 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.021 20:00:11 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.021 20:00:11 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.021 20:00:11 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.021 20:00:11 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.021 20:00:11 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.021 20:00:11 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.021 20:00:11 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.021 20:00:11 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.021 20:00:11 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.021 20:00:11 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:21:43.021 20:00:11 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:21:43.021 20:00:11 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.021 20:00:11 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.021 20:00:11 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:43.021 20:00:11 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.021 20:00:11 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:43.021 20:00:11 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.021 20:00:11 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.021 20:00:11 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.021 20:00:11 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.021 20:00:11 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.021 20:00:11 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.021 20:00:11 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:43.021 20:00:11 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.021 20:00:11 keyring_file -- nvmf/common.sh@47 -- # : 0 00:21:43.022 20:00:11 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:43.022 20:00:11 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:43.022 20:00:11 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.022 20:00:11 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.022 20:00:11 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.022 20:00:11 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:43.022 20:00:11 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:43.022 20:00:11 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:43.022 20:00:11 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:43.022 20:00:11 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:43.022 20:00:11 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:43.022 20:00:11 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:43.022 20:00:11 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:43.022 20:00:11 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:43.022 20:00:11 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:43.022 20:00:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:43.022 20:00:11 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:43.022 20:00:11 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:43.022 20:00:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:43.022 20:00:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:43.022 20:00:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.HMR88rG0Pf 00:21:43.022 20:00:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:43.022 20:00:11 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:43.022 20:00:11 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:43.022 20:00:11 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:43.022 20:00:11 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:43.022 20:00:11 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:43.022 20:00:11 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:43.022 20:00:11 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HMR88rG0Pf 00:21:43.022 20:00:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.HMR88rG0Pf 00:21:43.022 20:00:11 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.HMR88rG0Pf 00:21:43.022 20:00:11 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:43.022 20:00:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:43.022 20:00:11 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:43.022 20:00:11 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:43.022 20:00:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:43.022 20:00:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:43.022 20:00:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.eVhG1hSlVg 00:21:43.022 20:00:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:43.022 20:00:11 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:43.022 20:00:11 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:43.022 20:00:11 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:43.022 20:00:11 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:21:43.022 20:00:11 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:43.022 20:00:11 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:43.022 20:00:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.eVhG1hSlVg 00:21:43.022 20:00:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.eVhG1hSlVg 00:21:43.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.022 20:00:11 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.eVhG1hSlVg 00:21:43.022 20:00:11 keyring_file -- keyring/file.sh@30 -- # tgtpid=84515 00:21:43.022 20:00:11 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:43.022 20:00:11 keyring_file -- keyring/file.sh@32 -- # waitforlisten 84515 00:21:43.022 20:00:11 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 84515 ']' 00:21:43.022 20:00:11 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.022 20:00:11 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:43.022 20:00:11 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.022 20:00:11 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:43.022 20:00:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:43.280 [2024-07-24 20:00:11.687332] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
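The two prep_key traces above each generate a temp file holding the hex key wrapped into the NVMe/TCP PSK interchange form (the NVMeTLSkey-1 string produced by format_interchange_psk) and restrict it to mode 0600. Later in this run those files are registered as named keyring entries over the bdevperf RPC socket and one of them is handed to the NVMe bdev as its TLS pre-shared key; condensed, with the temp paths mktemp returned here, that flow is:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Register the PSK files prepared above as keyring entries key0 and key1.
    "$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HMR88rG0Pf
    "$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.eVhG1hSlVg
    # Attach an NVMe/TCP controller that presents key0 as its TLS PSK.
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk key0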
00:21:43.280 [2024-07-24 20:00:11.687670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84515 ] 00:21:43.280 [2024-07-24 20:00:11.830015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.538 [2024-07-24 20:00:11.961456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.538 [2024-07-24 20:00:12.021090] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:44.107 20:00:12 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:44.107 20:00:12 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:21:44.107 20:00:12 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:44.107 20:00:12 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.107 20:00:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:44.107 [2024-07-24 20:00:12.660917] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.107 null0 00:21:44.107 [2024-07-24 20:00:12.692870] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:44.107 [2024-07-24 20:00:12.693275] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:44.107 [2024-07-24 20:00:12.700872] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:44.107 20:00:12 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.107 20:00:12 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:44.107 20:00:12 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:44.107 20:00:12 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:44.107 20:00:12 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:44.107 20:00:12 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:44.107 20:00:12 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:44.107 20:00:12 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:44.107 20:00:12 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:44.107 20:00:12 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.107 20:00:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:44.107 [2024-07-24 20:00:12.712871] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:44.107 request: 00:21:44.107 { 00:21:44.107 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:44.107 "secure_channel": false, 00:21:44.107 "listen_address": { 00:21:44.107 "trtype": "tcp", 00:21:44.107 "traddr": "127.0.0.1", 00:21:44.107 "trsvcid": "4420" 00:21:44.107 }, 00:21:44.107 "method": "nvmf_subsystem_add_listener", 00:21:44.107 "req_id": 1 00:21:44.107 } 00:21:44.107 Got JSON-RPC error response 00:21:44.107 response: 00:21:44.107 { 00:21:44.107 "code": -32602, 00:21:44.107 "message": "Invalid parameters" 00:21:44.107 } 00:21:44.107 20:00:12 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
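The NOT block above is the negative half of the listener check: the setup RPC batch has already registered 127.0.0.1:4420 (the "Listening on 127.0.0.1 port 4420" notice), so adding the same listener again has to fail. Driven by hand against the same target socket, the assertion would look roughly like this:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    if "$rpc" -s /var/tmp/spdk.sock nvmf_subsystem_add_listener \
            -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0; then
        echo "re-adding an existing listener unexpectedly succeeded" >&2
        exit 1
    fi  # expected: 'Invalid parameters' after the target logs 'Listener already exists'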
00:21:44.107 20:00:12 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:44.107 20:00:12 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:44.107 20:00:12 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:44.107 20:00:12 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:44.107 20:00:12 keyring_file -- keyring/file.sh@46 -- # bperfpid=84532 00:21:44.107 20:00:12 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:44.107 20:00:12 keyring_file -- keyring/file.sh@48 -- # waitforlisten 84532 /var/tmp/bperf.sock 00:21:44.107 20:00:12 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 84532 ']' 00:21:44.107 20:00:12 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:44.107 20:00:12 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:44.107 20:00:12 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:44.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:44.107 20:00:12 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:44.107 20:00:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:44.107 [2024-07-24 20:00:12.775412] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 00:21:44.107 [2024-07-24 20:00:12.775769] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84532 ] 00:21:44.364 [2024-07-24 20:00:12.917576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.622 [2024-07-24 20:00:13.045747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.622 [2024-07-24 20:00:13.102721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:45.186 20:00:13 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:45.186 20:00:13 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:21:45.186 20:00:13 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HMR88rG0Pf 00:21:45.186 20:00:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HMR88rG0Pf 00:21:45.751 20:00:14 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.eVhG1hSlVg 00:21:45.751 20:00:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.eVhG1hSlVg 00:21:45.751 20:00:14 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:21:45.751 20:00:14 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:21:45.751 20:00:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:45.751 20:00:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:45.751 20:00:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:46.314 20:00:14 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.HMR88rG0Pf == 
\/\t\m\p\/\t\m\p\.\H\M\R\8\8\r\G\0\P\f ]] 00:21:46.314 20:00:14 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:21:46.314 20:00:14 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:46.314 20:00:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:46.314 20:00:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:46.314 20:00:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:46.572 20:00:15 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.eVhG1hSlVg == \/\t\m\p\/\t\m\p\.\e\V\h\G\1\h\S\l\V\g ]] 00:21:46.572 20:00:15 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:21:46.572 20:00:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:46.572 20:00:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:46.572 20:00:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:46.572 20:00:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:46.572 20:00:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:46.829 20:00:15 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:21:46.829 20:00:15 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:21:46.829 20:00:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:46.829 20:00:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:46.829 20:00:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:46.829 20:00:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:46.829 20:00:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:47.088 20:00:15 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:47.088 20:00:15 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:47.088 20:00:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:47.346 [2024-07-24 20:00:15.896331] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:47.346 nvme0n1 00:21:47.346 20:00:15 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:21:47.346 20:00:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:47.346 20:00:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:47.346 20:00:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:47.346 20:00:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:47.346 20:00:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:47.914 20:00:16 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:21:47.914 20:00:16 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:21:47.914 20:00:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:47.914 20:00:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:47.914 20:00:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:21:47.914 20:00:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:47.914 20:00:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:47.914 20:00:16 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:21:47.914 20:00:16 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:48.172 Running I/O for 1 seconds... 00:21:49.105 00:21:49.105 Latency(us) 00:21:49.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.105 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:49.105 nvme0n1 : 1.01 10775.16 42.09 0.00 0.00 11835.13 5272.67 19779.96 00:21:49.105 =================================================================================================================== 00:21:49.105 Total : 10775.16 42.09 0.00 0.00 11835.13 5272.67 19779.96 00:21:49.105 0 00:21:49.105 20:00:17 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:49.106 20:00:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:49.364 20:00:18 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:21:49.364 20:00:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:49.364 20:00:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:49.364 20:00:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:49.364 20:00:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:49.364 20:00:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:49.928 20:00:18 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:21:49.928 20:00:18 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:21:49.928 20:00:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:49.928 20:00:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:49.928 20:00:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:49.928 20:00:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:49.928 20:00:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:49.928 20:00:18 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:49.928 20:00:18 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:49.928 20:00:18 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:49.928 20:00:18 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:49.928 20:00:18 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:49.928 20:00:18 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:49.928 20:00:18 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:49.928 20:00:18 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:21:49.928 20:00:18 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:49.928 20:00:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:50.495 [2024-07-24 20:00:18.959682] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:50.495 [2024-07-24 20:00:18.960134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4a4f0 (107): Transport endpoint is not connected 00:21:50.495 [2024-07-24 20:00:18.961117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4a4f0 (9): Bad file descriptor 00:21:50.495 [2024-07-24 20:00:18.962113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:50.495 [2024-07-24 20:00:18.962139] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:50.495 [2024-07-24 20:00:18.962151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:50.495 request: 00:21:50.495 { 00:21:50.495 "name": "nvme0", 00:21:50.495 "trtype": "tcp", 00:21:50.495 "traddr": "127.0.0.1", 00:21:50.495 "adrfam": "ipv4", 00:21:50.495 "trsvcid": "4420", 00:21:50.495 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:50.495 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:50.495 "prchk_reftag": false, 00:21:50.495 "prchk_guard": false, 00:21:50.495 "hdgst": false, 00:21:50.495 "ddgst": false, 00:21:50.495 "psk": "key1", 00:21:50.495 "method": "bdev_nvme_attach_controller", 00:21:50.495 "req_id": 1 00:21:50.495 } 00:21:50.495 Got JSON-RPC error response 00:21:50.495 response: 00:21:50.495 { 00:21:50.495 "code": -5, 00:21:50.495 "message": "Input/output error" 00:21:50.495 } 00:21:50.495 20:00:18 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:50.495 20:00:18 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:50.495 20:00:18 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:50.495 20:00:18 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:50.495 20:00:18 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:21:50.495 20:00:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:50.495 20:00:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:50.495 20:00:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:50.495 20:00:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:50.496 20:00:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:50.754 20:00:19 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:21:50.754 20:00:19 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:21:50.754 20:00:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:50.754 20:00:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:50.754 20:00:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:50.754 20:00:19 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:50.754 20:00:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:51.011 20:00:19 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:51.011 20:00:19 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:21:51.011 20:00:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:51.576 20:00:19 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:21:51.576 20:00:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:51.833 20:00:20 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:21:51.833 20:00:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:51.833 20:00:20 keyring_file -- keyring/file.sh@77 -- # jq length 00:21:52.092 20:00:20 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:21:52.092 20:00:20 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.HMR88rG0Pf 00:21:52.092 20:00:20 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.HMR88rG0Pf 00:21:52.092 20:00:20 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:52.092 20:00:20 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.HMR88rG0Pf 00:21:52.092 20:00:20 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:52.092 20:00:20 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:52.092 20:00:20 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:52.092 20:00:20 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:52.092 20:00:20 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HMR88rG0Pf 00:21:52.092 20:00:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HMR88rG0Pf 00:21:52.349 [2024-07-24 20:00:20.911049] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.HMR88rG0Pf': 0100660 00:21:52.350 [2024-07-24 20:00:20.911140] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:52.350 request: 00:21:52.350 { 00:21:52.350 "name": "key0", 00:21:52.350 "path": "/tmp/tmp.HMR88rG0Pf", 00:21:52.350 "method": "keyring_file_add_key", 00:21:52.350 "req_id": 1 00:21:52.350 } 00:21:52.350 Got JSON-RPC error response 00:21:52.350 response: 00:21:52.350 { 00:21:52.350 "code": -1, 00:21:52.350 "message": "Operation not permitted" 00:21:52.350 } 00:21:52.350 20:00:20 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:52.350 20:00:20 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:52.350 20:00:20 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:52.350 20:00:20 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:52.350 20:00:20 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.HMR88rG0Pf 00:21:52.350 20:00:20 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HMR88rG0Pf 00:21:52.350 20:00:20 keyring_file -- keyring/common.sh@8 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HMR88rG0Pf 00:21:52.607 20:00:21 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.HMR88rG0Pf 00:21:52.607 20:00:21 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:21:52.607 20:00:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:52.607 20:00:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:52.607 20:00:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:52.607 20:00:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:52.607 20:00:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:53.184 20:00:21 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:21:53.184 20:00:21 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:53.184 20:00:21 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:53.184 20:00:21 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:53.184 20:00:21 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:53.184 20:00:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.184 20:00:21 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:53.184 20:00:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.184 20:00:21 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:53.184 20:00:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:53.442 [2024-07-24 20:00:21.855259] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.HMR88rG0Pf': No such file or directory 00:21:53.442 [2024-07-24 20:00:21.855338] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:53.442 [2024-07-24 20:00:21.855371] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:53.442 [2024-07-24 20:00:21.855383] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:53.442 [2024-07-24 20:00:21.855396] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:53.442 request: 00:21:53.442 { 00:21:53.442 "name": "nvme0", 00:21:53.442 "trtype": "tcp", 00:21:53.442 "traddr": "127.0.0.1", 00:21:53.442 "adrfam": "ipv4", 00:21:53.442 "trsvcid": "4420", 00:21:53.442 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:53.442 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:53.442 "prchk_reftag": false, 00:21:53.442 "prchk_guard": false, 00:21:53.442 "hdgst": false, 00:21:53.442 "ddgst": false, 00:21:53.442 "psk": "key0", 00:21:53.442 "method": "bdev_nvme_attach_controller", 00:21:53.442 "req_id": 1 00:21:53.442 } 00:21:53.442 
Got JSON-RPC error response 00:21:53.442 response: 00:21:53.442 { 00:21:53.442 "code": -19, 00:21:53.442 "message": "No such device" 00:21:53.442 } 00:21:53.442 20:00:21 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:53.442 20:00:21 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:53.442 20:00:21 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:53.442 20:00:21 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:53.442 20:00:21 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:21:53.442 20:00:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:53.699 20:00:22 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:53.699 20:00:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:53.699 20:00:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:53.699 20:00:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:53.699 20:00:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:53.699 20:00:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:53.699 20:00:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ufXVMwCbGU 00:21:53.699 20:00:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:53.699 20:00:22 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:53.699 20:00:22 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:53.699 20:00:22 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:53.699 20:00:22 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:53.699 20:00:22 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:53.699 20:00:22 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:53.699 20:00:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ufXVMwCbGU 00:21:53.699 20:00:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ufXVMwCbGU 00:21:53.699 20:00:22 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.ufXVMwCbGU 00:21:53.699 20:00:22 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ufXVMwCbGU 00:21:53.699 20:00:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ufXVMwCbGU 00:21:53.957 20:00:22 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:53.957 20:00:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:54.535 nvme0n1 00:21:54.535 20:00:22 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:21:54.535 20:00:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:54.535 20:00:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:54.535 20:00:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:54.535 20:00:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:21:54.535 20:00:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:54.792 20:00:23 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:21:54.792 20:00:23 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:21:54.792 20:00:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:55.050 20:00:23 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:21:55.050 20:00:23 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:21:55.050 20:00:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:55.050 20:00:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:55.050 20:00:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:55.050 20:00:23 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:21:55.050 20:00:23 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:21:55.050 20:00:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:55.050 20:00:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:55.307 20:00:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:55.307 20:00:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:55.307 20:00:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:55.307 20:00:23 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:21:55.307 20:00:23 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:55.307 20:00:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:55.564 20:00:24 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:21:55.564 20:00:24 keyring_file -- keyring/file.sh@104 -- # jq length 00:21:55.564 20:00:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:55.822 20:00:24 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:21:55.822 20:00:24 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ufXVMwCbGU 00:21:55.822 20:00:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ufXVMwCbGU 00:21:56.080 20:00:24 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.eVhG1hSlVg 00:21:56.080 20:00:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.eVhG1hSlVg 00:21:56.338 20:00:24 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:56.338 20:00:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:56.596 nvme0n1 00:21:56.596 20:00:25 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:21:56.596 20:00:25 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:21:56.853 20:00:25 keyring_file -- keyring/file.sh@112 -- # config='{ 00:21:56.853 "subsystems": [ 00:21:56.853 { 00:21:56.853 "subsystem": "keyring", 00:21:56.853 "config": [ 00:21:56.853 { 00:21:56.853 "method": "keyring_file_add_key", 00:21:56.853 "params": { 00:21:56.853 "name": "key0", 00:21:56.853 "path": "/tmp/tmp.ufXVMwCbGU" 00:21:56.853 } 00:21:56.853 }, 00:21:56.853 { 00:21:56.853 "method": "keyring_file_add_key", 00:21:56.853 "params": { 00:21:56.853 "name": "key1", 00:21:56.853 "path": "/tmp/tmp.eVhG1hSlVg" 00:21:56.853 } 00:21:56.853 } 00:21:56.853 ] 00:21:56.853 }, 00:21:56.853 { 00:21:56.853 "subsystem": "iobuf", 00:21:56.853 "config": [ 00:21:56.853 { 00:21:56.853 "method": "iobuf_set_options", 00:21:56.853 "params": { 00:21:56.853 "small_pool_count": 8192, 00:21:56.854 "large_pool_count": 1024, 00:21:56.854 "small_bufsize": 8192, 00:21:56.854 "large_bufsize": 135168 00:21:56.854 } 00:21:56.854 } 00:21:56.854 ] 00:21:56.854 }, 00:21:56.854 { 00:21:56.854 "subsystem": "sock", 00:21:56.854 "config": [ 00:21:56.854 { 00:21:56.854 "method": "sock_set_default_impl", 00:21:56.854 "params": { 00:21:56.854 "impl_name": "uring" 00:21:56.854 } 00:21:56.854 }, 00:21:56.854 { 00:21:56.854 "method": "sock_impl_set_options", 00:21:56.854 "params": { 00:21:56.854 "impl_name": "ssl", 00:21:56.854 "recv_buf_size": 4096, 00:21:56.854 "send_buf_size": 4096, 00:21:56.854 "enable_recv_pipe": true, 00:21:56.854 "enable_quickack": false, 00:21:56.854 "enable_placement_id": 0, 00:21:56.854 "enable_zerocopy_send_server": true, 00:21:56.854 "enable_zerocopy_send_client": false, 00:21:56.854 "zerocopy_threshold": 0, 00:21:56.854 "tls_version": 0, 00:21:56.854 "enable_ktls": false 00:21:56.854 } 00:21:56.854 }, 00:21:56.854 { 00:21:56.854 "method": "sock_impl_set_options", 00:21:56.854 "params": { 00:21:56.854 "impl_name": "posix", 00:21:56.854 "recv_buf_size": 2097152, 00:21:56.854 "send_buf_size": 2097152, 00:21:56.854 "enable_recv_pipe": true, 00:21:56.854 "enable_quickack": false, 00:21:56.854 "enable_placement_id": 0, 00:21:56.854 "enable_zerocopy_send_server": true, 00:21:56.854 "enable_zerocopy_send_client": false, 00:21:56.854 "zerocopy_threshold": 0, 00:21:56.854 "tls_version": 0, 00:21:56.854 "enable_ktls": false 00:21:56.854 } 00:21:56.854 }, 00:21:56.854 { 00:21:56.854 "method": "sock_impl_set_options", 00:21:56.854 "params": { 00:21:56.854 "impl_name": "uring", 00:21:56.854 "recv_buf_size": 2097152, 00:21:56.854 "send_buf_size": 2097152, 00:21:56.854 "enable_recv_pipe": true, 00:21:56.854 "enable_quickack": false, 00:21:56.854 "enable_placement_id": 0, 00:21:56.854 "enable_zerocopy_send_server": false, 00:21:56.854 "enable_zerocopy_send_client": false, 00:21:56.854 "zerocopy_threshold": 0, 00:21:56.854 "tls_version": 0, 00:21:56.854 "enable_ktls": false 00:21:56.854 } 00:21:56.854 } 00:21:56.854 ] 00:21:56.854 }, 00:21:56.854 { 00:21:56.854 "subsystem": "vmd", 00:21:56.854 "config": [] 00:21:56.854 }, 00:21:56.854 { 00:21:56.854 "subsystem": "accel", 00:21:56.854 "config": [ 00:21:56.854 { 00:21:56.854 "method": "accel_set_options", 00:21:56.854 "params": { 00:21:56.854 "small_cache_size": 128, 00:21:56.854 "large_cache_size": 16, 00:21:56.854 "task_count": 2048, 00:21:56.854 "sequence_count": 2048, 00:21:56.854 "buf_count": 2048 00:21:56.854 } 00:21:56.854 } 00:21:56.854 ] 00:21:56.854 }, 00:21:56.854 { 00:21:56.854 "subsystem": "bdev", 00:21:56.854 "config": [ 00:21:56.854 { 
00:21:56.854 "method": "bdev_set_options", 00:21:56.854 "params": { 00:21:56.854 "bdev_io_pool_size": 65535, 00:21:56.854 "bdev_io_cache_size": 256, 00:21:56.854 "bdev_auto_examine": true, 00:21:56.854 "iobuf_small_cache_size": 128, 00:21:56.854 "iobuf_large_cache_size": 16 00:21:56.854 } 00:21:56.854 }, 00:21:56.854 { 00:21:56.854 "method": "bdev_raid_set_options", 00:21:56.854 "params": { 00:21:56.854 "process_window_size_kb": 1024, 00:21:56.854 "process_max_bandwidth_mb_sec": 0 00:21:56.854 } 00:21:56.854 }, 00:21:56.854 { 00:21:56.854 "method": "bdev_iscsi_set_options", 00:21:56.854 "params": { 00:21:56.854 "timeout_sec": 30 00:21:56.854 } 00:21:56.854 }, 00:21:56.854 { 00:21:56.854 "method": "bdev_nvme_set_options", 00:21:56.854 "params": { 00:21:56.854 "action_on_timeout": "none", 00:21:56.854 "timeout_us": 0, 00:21:56.854 "timeout_admin_us": 0, 00:21:56.854 "keep_alive_timeout_ms": 10000, 00:21:56.854 "arbitration_burst": 0, 00:21:56.854 "low_priority_weight": 0, 00:21:56.854 "medium_priority_weight": 0, 00:21:56.854 "high_priority_weight": 0, 00:21:56.854 "nvme_adminq_poll_period_us": 10000, 00:21:56.854 "nvme_ioq_poll_period_us": 0, 00:21:56.854 "io_queue_requests": 512, 00:21:56.854 "delay_cmd_submit": true, 00:21:56.854 "transport_retry_count": 4, 00:21:56.854 "bdev_retry_count": 3, 00:21:56.854 "transport_ack_timeout": 0, 00:21:56.854 "ctrlr_loss_timeout_sec": 0, 00:21:56.854 "reconnect_delay_sec": 0, 00:21:56.854 "fast_io_fail_timeout_sec": 0, 00:21:56.854 "disable_auto_failback": false, 00:21:56.854 "generate_uuids": false, 00:21:56.854 "transport_tos": 0, 00:21:56.854 "nvme_error_stat": false, 00:21:56.854 "rdma_srq_size": 0, 00:21:56.854 "io_path_stat": false, 00:21:56.854 "allow_accel_sequence": false, 00:21:56.854 "rdma_max_cq_size": 0, 00:21:56.854 "rdma_cm_event_timeout_ms": 0, 00:21:56.854 "dhchap_digests": [ 00:21:56.854 "sha256", 00:21:56.854 "sha384", 00:21:56.854 "sha512" 00:21:56.854 ], 00:21:56.854 "dhchap_dhgroups": [ 00:21:56.854 "null", 00:21:56.854 "ffdhe2048", 00:21:56.854 "ffdhe3072", 00:21:56.854 "ffdhe4096", 00:21:56.854 "ffdhe6144", 00:21:56.854 "ffdhe8192" 00:21:56.854 ] 00:21:56.854 } 00:21:56.854 }, 00:21:56.854 { 00:21:56.854 "method": "bdev_nvme_attach_controller", 00:21:56.854 "params": { 00:21:56.854 "name": "nvme0", 00:21:56.854 "trtype": "TCP", 00:21:56.854 "adrfam": "IPv4", 00:21:56.854 "traddr": "127.0.0.1", 00:21:56.854 "trsvcid": "4420", 00:21:56.854 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:56.854 "prchk_reftag": false, 00:21:56.854 "prchk_guard": false, 00:21:56.854 "ctrlr_loss_timeout_sec": 0, 00:21:56.854 "reconnect_delay_sec": 0, 00:21:56.854 "fast_io_fail_timeout_sec": 0, 00:21:56.854 "psk": "key0", 00:21:56.854 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:56.854 "hdgst": false, 00:21:56.854 "ddgst": false 00:21:56.854 } 00:21:56.854 }, 00:21:56.854 { 00:21:56.854 "method": "bdev_nvme_set_hotplug", 00:21:56.854 "params": { 00:21:56.854 "period_us": 100000, 00:21:56.854 "enable": false 00:21:56.854 } 00:21:56.854 }, 00:21:56.854 { 00:21:56.854 "method": "bdev_wait_for_examine" 00:21:56.854 } 00:21:56.854 ] 00:21:56.854 }, 00:21:56.854 { 00:21:56.854 "subsystem": "nbd", 00:21:56.854 "config": [] 00:21:56.854 } 00:21:56.854 ] 00:21:56.854 }' 00:21:56.854 20:00:25 keyring_file -- keyring/file.sh@114 -- # killprocess 84532 00:21:56.854 20:00:25 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 84532 ']' 00:21:56.854 20:00:25 keyring_file -- common/autotest_common.sh@954 -- # kill -0 84532 00:21:56.854 20:00:25 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:21:56.854 20:00:25 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:56.854 20:00:25 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84532 00:21:56.854 killing process with pid 84532 00:21:56.854 Received shutdown signal, test time was about 1.000000 seconds 00:21:56.854 00:21:56.854 Latency(us) 00:21:56.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.854 =================================================================================================================== 00:21:56.854 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:56.854 20:00:25 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:56.854 20:00:25 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:56.854 20:00:25 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84532' 00:21:56.854 20:00:25 keyring_file -- common/autotest_common.sh@969 -- # kill 84532 00:21:56.854 20:00:25 keyring_file -- common/autotest_common.sh@974 -- # wait 84532 00:21:57.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:57.181 20:00:25 keyring_file -- keyring/file.sh@117 -- # bperfpid=84789 00:21:57.181 20:00:25 keyring_file -- keyring/file.sh@119 -- # waitforlisten 84789 /var/tmp/bperf.sock 00:21:57.181 20:00:25 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 84789 ']' 00:21:57.181 20:00:25 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:57.181 20:00:25 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:57.181 20:00:25 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:57.181 20:00:25 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
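(For reference: stripped of the xtrace noise, the keyring_file flow exercised above reduces to the commands below. The socket path, NQNs and rpc.py location are the ones from this run; the key string is the test pattern produced by the script's format_interchange_psk helper, not a real secret, and the single combined jq filter is just a shorthand for the two-step jq the helpers use.)

key_path=$(mktemp)                                   # e.g. /tmp/tmp.ufXVMwCbGU in this run
echo "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$key_path"
chmod 0600 "$key_path"                               # group/other access (0660) is refused, as seen above
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key_path"
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
    | jq '.[] | select(.name == "key0") | .refcnt'   # attaching bumps the refcount from 1 to 2

The two negative cases traced earlier follow from the same commands: a key file left at 0660 is rejected by keyring_file_add_key with "Operation not permitted", and attaching after the backing file has been removed fails with "No such device".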
00:21:57.181 20:00:25 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:57.181 20:00:25 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:21:57.181 "subsystems": [ 00:21:57.181 { 00:21:57.181 "subsystem": "keyring", 00:21:57.181 "config": [ 00:21:57.181 { 00:21:57.181 "method": "keyring_file_add_key", 00:21:57.181 "params": { 00:21:57.181 "name": "key0", 00:21:57.181 "path": "/tmp/tmp.ufXVMwCbGU" 00:21:57.181 } 00:21:57.181 }, 00:21:57.181 { 00:21:57.181 "method": "keyring_file_add_key", 00:21:57.181 "params": { 00:21:57.181 "name": "key1", 00:21:57.181 "path": "/tmp/tmp.eVhG1hSlVg" 00:21:57.181 } 00:21:57.181 } 00:21:57.181 ] 00:21:57.181 }, 00:21:57.181 { 00:21:57.181 "subsystem": "iobuf", 00:21:57.181 "config": [ 00:21:57.181 { 00:21:57.181 "method": "iobuf_set_options", 00:21:57.181 "params": { 00:21:57.181 "small_pool_count": 8192, 00:21:57.181 "large_pool_count": 1024, 00:21:57.181 "small_bufsize": 8192, 00:21:57.181 "large_bufsize": 135168 00:21:57.181 } 00:21:57.181 } 00:21:57.181 ] 00:21:57.181 }, 00:21:57.181 { 00:21:57.181 "subsystem": "sock", 00:21:57.181 "config": [ 00:21:57.181 { 00:21:57.181 "method": "sock_set_default_impl", 00:21:57.181 "params": { 00:21:57.181 "impl_name": "uring" 00:21:57.181 } 00:21:57.181 }, 00:21:57.181 { 00:21:57.181 "method": "sock_impl_set_options", 00:21:57.181 "params": { 00:21:57.181 "impl_name": "ssl", 00:21:57.181 "recv_buf_size": 4096, 00:21:57.181 "send_buf_size": 4096, 00:21:57.181 "enable_recv_pipe": true, 00:21:57.181 "enable_quickack": false, 00:21:57.181 "enable_placement_id": 0, 00:21:57.181 "enable_zerocopy_send_server": true, 00:21:57.181 "enable_zerocopy_send_client": false, 00:21:57.181 "zerocopy_threshold": 0, 00:21:57.181 "tls_version": 0, 00:21:57.181 "enable_ktls": false 00:21:57.181 } 00:21:57.181 }, 00:21:57.181 { 00:21:57.181 "method": "sock_impl_set_options", 00:21:57.181 "params": { 00:21:57.181 "impl_name": "posix", 00:21:57.181 "recv_buf_size": 2097152, 00:21:57.181 "send_buf_size": 2097152, 00:21:57.181 "enable_recv_pipe": true, 00:21:57.181 "enable_quickack": false, 00:21:57.181 "enable_placement_id": 0, 00:21:57.181 "enable_zerocopy_send_server": true, 00:21:57.181 "enable_zerocopy_send_client": false, 00:21:57.181 "zerocopy_threshold": 0, 00:21:57.181 "tls_version": 0, 00:21:57.181 "enable_ktls": false 00:21:57.181 } 00:21:57.181 }, 00:21:57.181 { 00:21:57.181 "method": "sock_impl_set_options", 00:21:57.181 "params": { 00:21:57.181 "impl_name": "uring", 00:21:57.181 "recv_buf_size": 2097152, 00:21:57.181 "send_buf_size": 2097152, 00:21:57.181 "enable_recv_pipe": true, 00:21:57.181 "enable_quickack": false, 00:21:57.181 "enable_placement_id": 0, 00:21:57.181 "enable_zerocopy_send_server": false, 00:21:57.181 "enable_zerocopy_send_client": false, 00:21:57.181 "zerocopy_threshold": 0, 00:21:57.181 "tls_version": 0, 00:21:57.181 "enable_ktls": false 00:21:57.181 } 00:21:57.181 } 00:21:57.181 ] 00:21:57.181 }, 00:21:57.181 { 00:21:57.181 "subsystem": "vmd", 00:21:57.181 "config": [] 00:21:57.181 }, 00:21:57.181 { 00:21:57.181 "subsystem": "accel", 00:21:57.181 "config": [ 00:21:57.181 { 00:21:57.181 "method": "accel_set_options", 00:21:57.181 "params": { 00:21:57.181 "small_cache_size": 128, 00:21:57.181 "large_cache_size": 16, 00:21:57.181 "task_count": 2048, 00:21:57.181 "sequence_count": 2048, 00:21:57.181 "buf_count": 2048 00:21:57.181 } 00:21:57.181 } 00:21:57.181 ] 00:21:57.181 }, 00:21:57.181 { 00:21:57.181 "subsystem": "bdev", 00:21:57.181 "config": [ 00:21:57.181 { 00:21:57.181 "method": 
"bdev_set_options", 00:21:57.181 "params": { 00:21:57.181 "bdev_io_pool_size": 65535, 00:21:57.181 "bdev_io_cache_size": 256, 00:21:57.181 "bdev_auto_examine": true, 00:21:57.181 "iobuf_small_cache_size": 128, 00:21:57.181 "iobuf_large_cache_size": 16 00:21:57.181 } 00:21:57.181 }, 00:21:57.181 { 00:21:57.181 "method": "bdev_raid_set_options", 00:21:57.181 "params": { 00:21:57.181 "process_window_size_kb": 1024, 00:21:57.181 "process_max_bandwidth_mb_sec": 0 00:21:57.181 } 00:21:57.181 }, 00:21:57.181 { 00:21:57.181 "method": "bdev_iscsi_set_options", 00:21:57.181 "params": { 00:21:57.181 "timeout_sec": 30 00:21:57.181 } 00:21:57.181 }, 00:21:57.181 { 00:21:57.181 "method": "bdev_nvme_set_options", 00:21:57.181 "params": { 00:21:57.182 "action_on_timeout": "none", 00:21:57.182 "timeout_us": 0, 00:21:57.182 "timeout_admin_us": 0, 00:21:57.182 "keep_alive_timeout_ms": 10000, 00:21:57.182 "arbitration_burst": 0, 00:21:57.182 "low_priority_weight": 0, 00:21:57.182 "medium_priority_weight": 0, 00:21:57.182 "high_priority_weight": 0, 00:21:57.182 "nvme_adminq_poll_period_us": 10000, 00:21:57.182 "nvme_ioq_poll_period_us": 0, 00:21:57.182 "io_queue_requests": 512, 00:21:57.182 "delay_cmd_submit": true, 00:21:57.182 "transport_retry_count": 4, 00:21:57.182 "bdev_retry_count": 3, 00:21:57.182 "transport_ack_timeout": 0, 00:21:57.182 "ctrlr_loss_timeout_sec": 0, 00:21:57.182 "reconnect_delay_sec": 0, 00:21:57.182 "fast_io_fail_timeout_sec": 0, 00:21:57.182 "disable_auto_failback": false, 00:21:57.182 "generate_uuids": false, 00:21:57.182 "transport_tos": 0, 00:21:57.182 "nvme_error_stat": false, 00:21:57.182 "rdma_srq_size": 0, 00:21:57.182 "io_path_stat": false, 00:21:57.182 "allow_accel_sequence": false, 00:21:57.182 "rdma_max_cq_size": 0, 00:21:57.182 "rdma_cm_event_timeout_ms": 0, 00:21:57.182 "dhchap_digests": [ 00:21:57.182 "sha256", 00:21:57.182 "sha384", 00:21:57.182 "sha512" 00:21:57.182 ], 00:21:57.182 "dhchap_dhgroups": [ 00:21:57.182 "null", 00:21:57.182 "ffdhe2048", 00:21:57.182 "ffdhe3072", 00:21:57.182 "ffdhe4096", 00:21:57.182 "ffdhe6144", 00:21:57.182 "ffdhe8192" 00:21:57.182 ] 00:21:57.182 } 00:21:57.182 }, 00:21:57.182 { 00:21:57.182 "method": "bdev_nvme_attach_controller", 00:21:57.182 "params": { 00:21:57.182 "name": "nvme0", 00:21:57.182 "trtype": "TCP", 00:21:57.182 "adrfam": "IPv4", 00:21:57.182 "traddr": "127.0.0.1", 00:21:57.182 "trsvcid": "4420", 00:21:57.182 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:57.182 "prchk_reftag": false, 00:21:57.182 "prchk_guard": false, 00:21:57.182 "ctrlr_loss_timeout_sec": 0, 00:21:57.182 "reconnect_delay_sec": 0, 00:21:57.182 "fast_io_fail_timeout_sec": 0, 00:21:57.182 "psk": "key0", 00:21:57.182 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:57.182 "hdgst": false, 00:21:57.182 "ddgst": false 00:21:57.182 } 00:21:57.182 }, 00:21:57.182 { 00:21:57.182 "method": "bdev_nvme_set_hotplug", 00:21:57.182 "params": { 00:21:57.182 "period_us": 100000, 00:21:57.182 "enable": false 00:21:57.182 } 00:21:57.182 }, 00:21:57.182 { 00:21:57.182 "method": "bdev_wait_for_examine" 00:21:57.182 } 00:21:57.182 ] 00:21:57.182 }, 00:21:57.182 { 00:21:57.182 "subsystem": "nbd", 00:21:57.182 "config": [] 00:21:57.182 } 00:21:57.182 ] 00:21:57.182 }' 00:21:57.182 20:00:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:57.182 [2024-07-24 20:00:25.790440] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:21:57.182 [2024-07-24 20:00:25.790534] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84789 ] 00:21:57.439 [2024-07-24 20:00:25.929455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.439 [2024-07-24 20:00:26.037527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.696 [2024-07-24 20:00:26.170574] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:57.696 [2024-07-24 20:00:26.224071] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:58.261 20:00:26 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:58.261 20:00:26 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:21:58.261 20:00:26 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:21:58.261 20:00:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:58.261 20:00:26 keyring_file -- keyring/file.sh@120 -- # jq length 00:21:58.520 20:00:27 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:21:58.520 20:00:27 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:21:58.520 20:00:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:58.520 20:00:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:58.520 20:00:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:58.520 20:00:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:58.520 20:00:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:58.778 20:00:27 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:58.778 20:00:27 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:21:58.778 20:00:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:58.778 20:00:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:58.778 20:00:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:58.778 20:00:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:58.778 20:00:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:59.343 20:00:27 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:21:59.343 20:00:27 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:21:59.343 20:00:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:59.343 20:00:27 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:21:59.343 20:00:28 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:21:59.343 20:00:28 keyring_file -- keyring/file.sh@1 -- # cleanup 00:21:59.343 20:00:28 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.ufXVMwCbGU /tmp/tmp.eVhG1hSlVg 00:21:59.343 20:00:28 keyring_file -- keyring/file.sh@20 -- # killprocess 84789 00:21:59.343 20:00:28 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 84789 ']' 00:21:59.343 20:00:28 keyring_file -- common/autotest_common.sh@954 -- # kill -0 84789 00:21:59.343 20:00:28 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:21:59.343 20:00:28 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:59.343 20:00:28 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84789 00:21:59.601 killing process with pid 84789 00:21:59.601 Received shutdown signal, test time was about 1.000000 seconds 00:21:59.601 00:21:59.601 Latency(us) 00:21:59.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.601 =================================================================================================================== 00:21:59.601 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:59.601 20:00:28 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:59.601 20:00:28 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:59.601 20:00:28 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84789' 00:21:59.601 20:00:28 keyring_file -- common/autotest_common.sh@969 -- # kill 84789 00:21:59.601 20:00:28 keyring_file -- common/autotest_common.sh@974 -- # wait 84789 00:21:59.601 20:00:28 keyring_file -- keyring/file.sh@21 -- # killprocess 84515 00:21:59.601 20:00:28 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 84515 ']' 00:21:59.601 20:00:28 keyring_file -- common/autotest_common.sh@954 -- # kill -0 84515 00:21:59.601 20:00:28 keyring_file -- common/autotest_common.sh@955 -- # uname 00:21:59.601 20:00:28 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:59.601 20:00:28 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84515 00:21:59.859 killing process with pid 84515 00:21:59.859 20:00:28 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:59.859 20:00:28 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:59.859 20:00:28 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84515' 00:21:59.859 20:00:28 keyring_file -- common/autotest_common.sh@969 -- # kill 84515 00:21:59.859 [2024-07-24 20:00:28.279203] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:59.859 20:00:28 keyring_file -- common/autotest_common.sh@974 -- # wait 84515 00:22:00.117 ************************************ 00:22:00.117 END TEST keyring_file 00:22:00.117 ************************************ 00:22:00.117 00:22:00.117 real 0m17.292s 00:22:00.117 user 0m43.610s 00:22:00.117 sys 0m3.244s 00:22:00.117 20:00:28 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:00.117 20:00:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:00.117 20:00:28 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:22:00.117 20:00:28 -- spdk/autotest.sh@301 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:00.117 20:00:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:00.117 20:00:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:00.117 20:00:28 -- common/autotest_common.sh@10 -- # set +x 00:22:00.117 ************************************ 00:22:00.117 START TEST keyring_linux 00:22:00.117 ************************************ 00:22:00.117 20:00:28 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:00.375 * Looking for test storage... 
00:22:00.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:00.375 20:00:28 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:00.375 20:00:28 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:00.375 20:00:28 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:22:00.375 20:00:28 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.375 20:00:28 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.375 20:00:28 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.375 20:00:28 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.375 20:00:28 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.375 20:00:28 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.375 20:00:28 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.375 20:00:28 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.375 20:00:28 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.375 20:00:28 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.375 20:00:28 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:69cdc0e8-4c23-4318-834b-1d87efff05de 00:22:00.375 20:00:28 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=69cdc0e8-4c23-4318-834b-1d87efff05de 00:22:00.375 20:00:28 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.375 20:00:28 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.375 20:00:28 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:00.375 20:00:28 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.375 20:00:28 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:00.375 20:00:28 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.375 20:00:28 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.375 20:00:28 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.376 20:00:28 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.376 20:00:28 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.376 20:00:28 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.376 20:00:28 keyring_linux -- paths/export.sh@5 -- # export PATH 00:22:00.376 20:00:28 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.376 20:00:28 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:22:00.376 20:00:28 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:00.376 20:00:28 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:00.376 20:00:28 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.376 20:00:28 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.376 20:00:28 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.376 20:00:28 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:00.376 20:00:28 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:00.376 20:00:28 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:00.376 20:00:28 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:00.376 20:00:28 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:00.376 20:00:28 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:00.376 20:00:28 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:22:00.376 20:00:28 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:22:00.376 20:00:28 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:22:00.376 20:00:28 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:22:00.376 20:00:28 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:00.376 20:00:28 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:22:00.376 20:00:28 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:00.376 20:00:28 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:00.376 20:00:28 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:22:00.376 20:00:28 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:00.376 20:00:28 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:00.376 20:00:28 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:22:00.376 20:00:28 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:00.376 20:00:28 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:00.376 20:00:28 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:22:00.376 20:00:28 keyring_linux -- nvmf/common.sh@705 -- # python - 00:22:00.376 20:00:28 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:22:00.376 20:00:28 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:22:00.376 /tmp/:spdk-test:key0 00:22:00.376 20:00:28 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:22:00.376 20:00:28 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:00.376 20:00:28 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:22:00.376 20:00:28 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:00.376 20:00:28 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:00.376 20:00:28 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:22:00.376 20:00:28 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:22:00.376 20:00:28 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:00.376 20:00:28 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:22:00.376 20:00:28 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:00.376 20:00:28 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:22:00.376 20:00:28 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:22:00.376 20:00:28 keyring_linux -- nvmf/common.sh@705 -- # python - 00:22:00.376 20:00:28 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:22:00.376 20:00:28 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:22:00.376 /tmp/:spdk-test:key1 00:22:00.376 20:00:28 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=84913 00:22:00.376 20:00:28 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:00.376 20:00:28 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 84913 00:22:00.376 20:00:28 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 84913 ']' 00:22:00.376 20:00:28 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.376 20:00:28 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:00.376 20:00:28 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.376 20:00:28 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:00.376 20:00:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:00.376 [2024-07-24 20:00:29.011452] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
00:22:00.376 [2024-07-24 20:00:29.011572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84913 ] 00:22:00.635 [2024-07-24 20:00:29.149308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.635 [2024-07-24 20:00:29.276989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.893 [2024-07-24 20:00:29.330945] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:01.481 20:00:29 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.481 20:00:29 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:22:01.481 20:00:30 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:22:01.481 20:00:30 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.481 20:00:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:01.481 [2024-07-24 20:00:30.006334] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.481 null0 00:22:01.481 [2024-07-24 20:00:30.038289] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:01.481 [2024-07-24 20:00:30.038543] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:01.481 20:00:30 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.481 20:00:30 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:22:01.481 284011066 00:22:01.481 20:00:30 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:22:01.481 202684122 00:22:01.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:01.481 20:00:30 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=84931 00:22:01.481 20:00:30 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:22:01.481 20:00:30 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 84931 /var/tmp/bperf.sock 00:22:01.481 20:00:30 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 84931 ']' 00:22:01.481 20:00:30 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:01.481 20:00:30 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:01.481 20:00:30 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:01.481 20:00:30 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:01.481 20:00:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:01.481 [2024-07-24 20:00:30.128222] Starting SPDK v24.09-pre git sha1 0c322284f / DPDK 24.03.0 initialization... 
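(The keyring_linux variant starting here keeps the same interchange-format PSKs but stores them in the kernel session keyring instead of plain files; keyctl add prints the key serial that the test later compares against. Condensed with the values from this run, the flow the following trace walks through looks roughly like this; the rpc.py socket and NQNs are unchanged from the keyring_file test.)

keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
                                                     # prints the key serial, 284011066 in this run
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
keyctl search @s user :spdk-test:key0                # same serial, confirming which kernel key was used

Starting bdevperf with --wait-for-rpc, as done just above, is what allows keyring_linux_set_options to be applied before framework_start_init brings the remaining subsystems up.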
00:22:01.481 [2024-07-24 20:00:30.128570] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84931 ] 00:22:01.739 [2024-07-24 20:00:30.266449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.739 [2024-07-24 20:00:30.398342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.672 20:00:31 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:02.672 20:00:31 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:22:02.672 20:00:31 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:22:02.672 20:00:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:22:02.931 20:00:31 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:22:02.931 20:00:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:03.189 [2024-07-24 20:00:31.733119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:22:03.189 20:00:31 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:03.189 20:00:31 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:03.447 [2024-07-24 20:00:32.065444] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:03.705 nvme0n1 00:22:03.705 20:00:32 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:22:03.705 20:00:32 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:22:03.705 20:00:32 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:03.705 20:00:32 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:03.705 20:00:32 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:03.705 20:00:32 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:03.963 20:00:32 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:22:03.963 20:00:32 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:03.963 20:00:32 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:22:03.963 20:00:32 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:03.963 20:00:32 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:22:03.963 20:00:32 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:03.963 20:00:32 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:22:04.222 20:00:32 keyring_linux -- keyring/linux.sh@25 -- # sn=284011066 00:22:04.222 20:00:32 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:22:04.222 20:00:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:04.222 
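check_keys above asks the bdevperf process, over its RPC socket, what its keyring module can see: keyring_get_keys must report exactly one key, and the entry named :spdk-test:key0 is picked out with jq so its serial number can be matched against the kernel's (that comparison follows below). Roughly, the two queries look like this (socket path as in the trace; the combined jq filter mirrors the two shown above):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/bperf.sock keyring_get_keys | jq length
  "$rpc" -s /var/tmp/bperf.sock keyring_get_keys | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn'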
20:00:32 keyring_linux -- keyring/linux.sh@26 -- # [[ 284011066 == \2\8\4\0\1\1\0\6\6 ]] 00:22:04.222 20:00:32 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 284011066 00:22:04.222 20:00:32 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:22:04.222 20:00:32 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:04.222 Running I/O for 1 seconds... 00:22:05.160 00:22:05.160 Latency(us) 00:22:05.160 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.160 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:05.160 nvme0n1 : 1.05 12357.15 48.27 0.00 0.00 10246.30 9115.46 49330.73 00:22:05.160 =================================================================================================================== 00:22:05.160 Total : 12357.15 48.27 0.00 0.00 10246.30 9115.46 49330.73 00:22:05.160 0 00:22:05.417 20:00:33 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:05.417 20:00:33 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:05.674 20:00:34 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:22:05.674 20:00:34 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:22:05.674 20:00:34 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:05.674 20:00:34 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:05.674 20:00:34 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:05.674 20:00:34 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:05.931 20:00:34 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:22:05.931 20:00:34 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:05.931 20:00:34 keyring_linux -- keyring/linux.sh@23 -- # return 00:22:05.931 20:00:34 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:05.931 20:00:34 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:22:05.931 20:00:34 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:05.931 20:00:34 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:22:05.931 20:00:34 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.931 20:00:34 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:22:05.931 20:00:34 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.932 20:00:34 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:05.932 20:00:34 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:06.190 [2024-07-24 20:00:34.823618] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:06.190 [2024-07-24 20:00:34.823785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1462460 (107): Transport endpoint is not connected 00:22:06.190 [2024-07-24 20:00:34.824774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1462460 (9): Bad file descriptor 00:22:06.190 [2024-07-24 20:00:34.825773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:06.190 [2024-07-24 20:00:34.825949] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:06.190 [2024-07-24 20:00:34.826182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:06.190 request: 00:22:06.190 { 00:22:06.190 "name": "nvme0", 00:22:06.190 "trtype": "tcp", 00:22:06.190 "traddr": "127.0.0.1", 00:22:06.190 "adrfam": "ipv4", 00:22:06.190 "trsvcid": "4420", 00:22:06.190 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:06.190 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:06.190 "prchk_reftag": false, 00:22:06.190 "prchk_guard": false, 00:22:06.190 "hdgst": false, 00:22:06.190 "ddgst": false, 00:22:06.190 "psk": ":spdk-test:key1", 00:22:06.190 "method": "bdev_nvme_attach_controller", 00:22:06.190 "req_id": 1 00:22:06.190 } 00:22:06.190 Got JSON-RPC error response 00:22:06.190 response: 00:22:06.190 { 00:22:06.190 "code": -5, 00:22:06.190 "message": "Input/output error" 00:22:06.190 } 00:22:06.190 20:00:34 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:22:06.190 20:00:34 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:06.190 20:00:34 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:06.190 20:00:34 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:06.190 20:00:34 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:22:06.190 20:00:34 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:06.190 20:00:34 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:22:06.190 20:00:34 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:22:06.190 20:00:34 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:22:06.190 20:00:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:06.190 20:00:34 keyring_linux -- keyring/linux.sh@33 -- # sn=284011066 00:22:06.190 20:00:34 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 284011066 00:22:06.190 1 links removed 00:22:06.190 20:00:34 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:06.190 20:00:34 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:22:06.190 20:00:34 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:22:06.190 20:00:34 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:22:06.190 20:00:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:22:06.190 20:00:34 keyring_linux -- keyring/linux.sh@33 -- # sn=202684122 00:22:06.190 20:00:34 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 202684122 00:22:06.447 1 links removed 00:22:06.447 20:00:34 keyring_linux
-- keyring/linux.sh@41 -- # killprocess 84931 00:22:06.447 20:00:34 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 84931 ']' 00:22:06.447 20:00:34 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 84931 00:22:06.447 20:00:34 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:22:06.447 20:00:34 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:06.447 20:00:34 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84931 00:22:06.447 killing process with pid 84931 00:22:06.447 Received shutdown signal, test time was about 1.000000 seconds 00:22:06.447 00:22:06.447 Latency(us) 00:22:06.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.447 =================================================================================================================== 00:22:06.447 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:06.447 20:00:34 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:06.447 20:00:34 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:06.447 20:00:34 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84931' 00:22:06.447 20:00:34 keyring_linux -- common/autotest_common.sh@969 -- # kill 84931 00:22:06.447 20:00:34 keyring_linux -- common/autotest_common.sh@974 -- # wait 84931 00:22:06.447 20:00:35 keyring_linux -- keyring/linux.sh@42 -- # killprocess 84913 00:22:06.447 20:00:35 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 84913 ']' 00:22:06.447 20:00:35 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 84913 00:22:06.447 20:00:35 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:22:06.447 20:00:35 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:06.447 20:00:35 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84913 00:22:06.704 killing process with pid 84913 00:22:06.704 20:00:35 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:06.704 20:00:35 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:06.704 20:00:35 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84913' 00:22:06.704 20:00:35 keyring_linux -- common/autotest_common.sh@969 -- # kill 84913 00:22:06.704 20:00:35 keyring_linux -- common/autotest_common.sh@974 -- # wait 84913 00:22:06.962 ************************************ 00:22:06.962 END TEST keyring_linux 00:22:06.962 ************************************ 00:22:06.962 00:22:06.962 real 0m6.811s 00:22:06.962 user 0m13.516s 00:22:06.962 sys 0m1.636s 00:22:06.962 20:00:35 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:06.962 20:00:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:06.962 20:00:35 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:22:06.962 20:00:35 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:22:06.962 20:00:35 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:22:06.962 20:00:35 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:22:06.962 20:00:35 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:22:06.962 20:00:35 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:22:06.962 20:00:35 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:22:06.962 20:00:35 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:22:06.962 20:00:35 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:22:06.962 20:00:35 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 
00:22:06.962 20:00:35 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:22:06.962 20:00:35 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:22:06.962 20:00:35 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:22:06.962 20:00:35 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:22:06.962 20:00:35 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:22:06.962 20:00:35 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:22:06.962 20:00:35 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:22:06.962 20:00:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:06.962 20:00:35 -- common/autotest_common.sh@10 -- # set +x 00:22:06.962 20:00:35 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:22:06.962 20:00:35 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:22:06.962 20:00:35 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:22:06.962 20:00:35 -- common/autotest_common.sh@10 -- # set +x 00:22:08.862 INFO: APP EXITING 00:22:08.862 INFO: killing all VMs 00:22:08.862 INFO: killing vhost app 00:22:08.862 INFO: EXIT DONE 00:22:09.120 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:09.120 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:09.377 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:09.943 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:09.943 Cleaning 00:22:09.943 Removing: /var/run/dpdk/spdk0/config 00:22:09.943 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:09.943 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:09.943 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:09.943 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:09.943 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:09.943 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:09.943 Removing: /var/run/dpdk/spdk1/config 00:22:09.943 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:22:09.943 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:22:09.943 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:22:09.943 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:22:09.943 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:22:09.943 Removing: /var/run/dpdk/spdk1/hugepage_info 00:22:09.943 Removing: /var/run/dpdk/spdk2/config 00:22:09.943 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:22:09.943 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:22:09.943 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:22:09.943 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:22:09.943 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:22:09.943 Removing: /var/run/dpdk/spdk2/hugepage_info 00:22:09.943 Removing: /var/run/dpdk/spdk3/config 00:22:09.943 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:22:09.943 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:22:09.943 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:22:09.943 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:22:09.943 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:22:09.943 Removing: /var/run/dpdk/spdk3/hugepage_info 00:22:09.943 Removing: /var/run/dpdk/spdk4/config 00:22:09.943 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:22:10.202 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:22:10.202 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:22:10.202 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:22:10.202 Removing: 
/var/run/dpdk/spdk4/fbarray_memzone 00:22:10.202 Removing: /var/run/dpdk/spdk4/hugepage_info 00:22:10.202 Removing: /dev/shm/nvmf_trace.0 00:22:10.202 Removing: /dev/shm/spdk_tgt_trace.pid58720 00:22:10.202 Removing: /var/run/dpdk/spdk0 00:22:10.202 Removing: /var/run/dpdk/spdk1 00:22:10.202 Removing: /var/run/dpdk/spdk2 00:22:10.202 Removing: /var/run/dpdk/spdk3 00:22:10.203 Removing: /var/run/dpdk/spdk4 00:22:10.203 Removing: /var/run/dpdk/spdk_pid58575 00:22:10.203 Removing: /var/run/dpdk/spdk_pid58720 00:22:10.203 Removing: /var/run/dpdk/spdk_pid58918 00:22:10.203 Removing: /var/run/dpdk/spdk_pid58999 00:22:10.203 Removing: /var/run/dpdk/spdk_pid59032 00:22:10.203 Removing: /var/run/dpdk/spdk_pid59136 00:22:10.203 Removing: /var/run/dpdk/spdk_pid59154 00:22:10.203 Removing: /var/run/dpdk/spdk_pid59282 00:22:10.203 Removing: /var/run/dpdk/spdk_pid59473 00:22:10.203 Removing: /var/run/dpdk/spdk_pid59619 00:22:10.203 Removing: /var/run/dpdk/spdk_pid59684 00:22:10.203 Removing: /var/run/dpdk/spdk_pid59760 00:22:10.203 Removing: /var/run/dpdk/spdk_pid59845 00:22:10.203 Removing: /var/run/dpdk/spdk_pid59923 00:22:10.203 Removing: /var/run/dpdk/spdk_pid59957 00:22:10.203 Removing: /var/run/dpdk/spdk_pid59993 00:22:10.203 Removing: /var/run/dpdk/spdk_pid60054 00:22:10.203 Removing: /var/run/dpdk/spdk_pid60154 00:22:10.203 Removing: /var/run/dpdk/spdk_pid60593 00:22:10.203 Removing: /var/run/dpdk/spdk_pid60645 00:22:10.203 Removing: /var/run/dpdk/spdk_pid60696 00:22:10.203 Removing: /var/run/dpdk/spdk_pid60712 00:22:10.203 Removing: /var/run/dpdk/spdk_pid60779 00:22:10.203 Removing: /var/run/dpdk/spdk_pid60795 00:22:10.203 Removing: /var/run/dpdk/spdk_pid60862 00:22:10.203 Removing: /var/run/dpdk/spdk_pid60878 00:22:10.203 Removing: /var/run/dpdk/spdk_pid60924 00:22:10.203 Removing: /var/run/dpdk/spdk_pid60942 00:22:10.203 Removing: /var/run/dpdk/spdk_pid60982 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61000 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61128 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61158 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61237 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61541 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61560 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61591 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61604 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61620 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61639 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61658 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61679 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61698 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61717 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61727 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61752 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61765 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61786 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61805 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61823 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61834 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61853 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61872 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61893 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61924 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61937 00:22:10.203 Removing: /var/run/dpdk/spdk_pid61972 00:22:10.203 Removing: /var/run/dpdk/spdk_pid62032 00:22:10.203 Removing: /var/run/dpdk/spdk_pid62060 00:22:10.203 Removing: /var/run/dpdk/spdk_pid62075 00:22:10.203 Removing: /var/run/dpdk/spdk_pid62104 00:22:10.203 Removing: /var/run/dpdk/spdk_pid62113 00:22:10.203 Removing: 
/var/run/dpdk/spdk_pid62121 00:22:10.203 Removing: /var/run/dpdk/spdk_pid62165 00:22:10.203 Removing: /var/run/dpdk/spdk_pid62184 00:22:10.203 Removing: /var/run/dpdk/spdk_pid62213 00:22:10.203 Removing: /var/run/dpdk/spdk_pid62222 00:22:10.203 Removing: /var/run/dpdk/spdk_pid62232 00:22:10.203 Removing: /var/run/dpdk/spdk_pid62247 00:22:10.203 Removing: /var/run/dpdk/spdk_pid62256 00:22:10.203 Removing: /var/run/dpdk/spdk_pid62266 00:22:10.203 Removing: /var/run/dpdk/spdk_pid62275 00:22:10.203 Removing: /var/run/dpdk/spdk_pid62285 00:22:10.203 Removing: /var/run/dpdk/spdk_pid62319 00:22:10.203 Removing: /var/run/dpdk/spdk_pid62344 00:22:10.203 Removing: /var/run/dpdk/spdk_pid62357 00:22:10.203 Removing: /var/run/dpdk/spdk_pid62385 00:22:10.461 Removing: /var/run/dpdk/spdk_pid62395 00:22:10.461 Removing: /var/run/dpdk/spdk_pid62408 00:22:10.461 Removing: /var/run/dpdk/spdk_pid62447 00:22:10.461 Removing: /var/run/dpdk/spdk_pid62460 00:22:10.461 Removing: /var/run/dpdk/spdk_pid62492 00:22:10.461 Removing: /var/run/dpdk/spdk_pid62495 00:22:10.461 Removing: /var/run/dpdk/spdk_pid62508 00:22:10.461 Removing: /var/run/dpdk/spdk_pid62521 00:22:10.461 Removing: /var/run/dpdk/spdk_pid62523 00:22:10.461 Removing: /var/run/dpdk/spdk_pid62536 00:22:10.461 Removing: /var/run/dpdk/spdk_pid62549 00:22:10.461 Removing: /var/run/dpdk/spdk_pid62551 00:22:10.461 Removing: /var/run/dpdk/spdk_pid62625 00:22:10.461 Removing: /var/run/dpdk/spdk_pid62678 00:22:10.461 Removing: /var/run/dpdk/spdk_pid62784 00:22:10.461 Removing: /var/run/dpdk/spdk_pid62817 00:22:10.461 Removing: /var/run/dpdk/spdk_pid62862 00:22:10.461 Removing: /var/run/dpdk/spdk_pid62882 00:22:10.461 Removing: /var/run/dpdk/spdk_pid62899 00:22:10.461 Removing: /var/run/dpdk/spdk_pid62919 00:22:10.461 Removing: /var/run/dpdk/spdk_pid62954 00:22:10.461 Removing: /var/run/dpdk/spdk_pid62975 00:22:10.461 Removing: /var/run/dpdk/spdk_pid63045 00:22:10.461 Removing: /var/run/dpdk/spdk_pid63067 00:22:10.461 Removing: /var/run/dpdk/spdk_pid63111 00:22:10.461 Removing: /var/run/dpdk/spdk_pid63186 00:22:10.461 Removing: /var/run/dpdk/spdk_pid63253 00:22:10.461 Removing: /var/run/dpdk/spdk_pid63284 00:22:10.461 Removing: /var/run/dpdk/spdk_pid63368 00:22:10.461 Removing: /var/run/dpdk/spdk_pid63416 00:22:10.461 Removing: /var/run/dpdk/spdk_pid63453 00:22:10.461 Removing: /var/run/dpdk/spdk_pid63667 00:22:10.461 Removing: /var/run/dpdk/spdk_pid63765 00:22:10.461 Removing: /var/run/dpdk/spdk_pid63793 00:22:10.461 Removing: /var/run/dpdk/spdk_pid64146 00:22:10.461 Removing: /var/run/dpdk/spdk_pid64184 00:22:10.461 Removing: /var/run/dpdk/spdk_pid64483 00:22:10.461 Removing: /var/run/dpdk/spdk_pid64886 00:22:10.461 Removing: /var/run/dpdk/spdk_pid65159 00:22:10.461 Removing: /var/run/dpdk/spdk_pid65936 00:22:10.461 Removing: /var/run/dpdk/spdk_pid66764 00:22:10.461 Removing: /var/run/dpdk/spdk_pid66880 00:22:10.462 Removing: /var/run/dpdk/spdk_pid66948 00:22:10.462 Removing: /var/run/dpdk/spdk_pid68205 00:22:10.462 Removing: /var/run/dpdk/spdk_pid68461 00:22:10.462 Removing: /var/run/dpdk/spdk_pid71845 00:22:10.462 Removing: /var/run/dpdk/spdk_pid72150 00:22:10.462 Removing: /var/run/dpdk/spdk_pid72258 00:22:10.462 Removing: /var/run/dpdk/spdk_pid72390 00:22:10.462 Removing: /var/run/dpdk/spdk_pid72419 00:22:10.462 Removing: /var/run/dpdk/spdk_pid72442 00:22:10.462 Removing: /var/run/dpdk/spdk_pid72474 00:22:10.462 Removing: /var/run/dpdk/spdk_pid72560 00:22:10.462 Removing: /var/run/dpdk/spdk_pid72689 00:22:10.462 Removing: /var/run/dpdk/spdk_pid72837 
00:22:10.462 Removing: /var/run/dpdk/spdk_pid72918 00:22:10.462 Removing: /var/run/dpdk/spdk_pid73112 00:22:10.462 Removing: /var/run/dpdk/spdk_pid73195 00:22:10.462 Removing: /var/run/dpdk/spdk_pid73288 00:22:10.462 Removing: /var/run/dpdk/spdk_pid73600 00:22:10.462 Removing: /var/run/dpdk/spdk_pid74010 00:22:10.462 Removing: /var/run/dpdk/spdk_pid74012 00:22:10.462 Removing: /var/run/dpdk/spdk_pid74293 00:22:10.462 Removing: /var/run/dpdk/spdk_pid74307 00:22:10.462 Removing: /var/run/dpdk/spdk_pid74325 00:22:10.462 Removing: /var/run/dpdk/spdk_pid74358 00:22:10.462 Removing: /var/run/dpdk/spdk_pid74363 00:22:10.462 Removing: /var/run/dpdk/spdk_pid74669 00:22:10.462 Removing: /var/run/dpdk/spdk_pid74714 00:22:10.462 Removing: /var/run/dpdk/spdk_pid74988 00:22:10.462 Removing: /var/run/dpdk/spdk_pid75192 00:22:10.462 Removing: /var/run/dpdk/spdk_pid75572 00:22:10.462 Removing: /var/run/dpdk/spdk_pid76083 00:22:10.462 Removing: /var/run/dpdk/spdk_pid76890 00:22:10.462 Removing: /var/run/dpdk/spdk_pid77470 00:22:10.462 Removing: /var/run/dpdk/spdk_pid77476 00:22:10.462 Removing: /var/run/dpdk/spdk_pid79384 00:22:10.462 Removing: /var/run/dpdk/spdk_pid79437 00:22:10.462 Removing: /var/run/dpdk/spdk_pid79492 00:22:10.462 Removing: /var/run/dpdk/spdk_pid79552 00:22:10.462 Removing: /var/run/dpdk/spdk_pid79673 00:22:10.462 Removing: /var/run/dpdk/spdk_pid79728 00:22:10.462 Removing: /var/run/dpdk/spdk_pid79795 00:22:10.462 Removing: /var/run/dpdk/spdk_pid79854 00:22:10.462 Removing: /var/run/dpdk/spdk_pid80167 00:22:10.462 Removing: /var/run/dpdk/spdk_pid81335 00:22:10.462 Removing: /var/run/dpdk/spdk_pid81475 00:22:10.462 Removing: /var/run/dpdk/spdk_pid81718 00:22:10.462 Removing: /var/run/dpdk/spdk_pid82266 00:22:10.462 Removing: /var/run/dpdk/spdk_pid82425 00:22:10.462 Removing: /var/run/dpdk/spdk_pid82583 00:22:10.720 Removing: /var/run/dpdk/spdk_pid82680 00:22:10.720 Removing: /var/run/dpdk/spdk_pid82920 00:22:10.720 Removing: /var/run/dpdk/spdk_pid83029 00:22:10.720 Removing: /var/run/dpdk/spdk_pid83702 00:22:10.720 Removing: /var/run/dpdk/spdk_pid83737 00:22:10.721 Removing: /var/run/dpdk/spdk_pid83767 00:22:10.721 Removing: /var/run/dpdk/spdk_pid84021 00:22:10.721 Removing: /var/run/dpdk/spdk_pid84056 00:22:10.721 Removing: /var/run/dpdk/spdk_pid84086 00:22:10.721 Removing: /var/run/dpdk/spdk_pid84515 00:22:10.721 Removing: /var/run/dpdk/spdk_pid84532 00:22:10.721 Removing: /var/run/dpdk/spdk_pid84789 00:22:10.721 Removing: /var/run/dpdk/spdk_pid84913 00:22:10.721 Removing: /var/run/dpdk/spdk_pid84931 00:22:10.721 Clean 00:22:10.721 20:00:39 -- common/autotest_common.sh@1451 -- # return 0 00:22:10.721 20:00:39 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:22:10.721 20:00:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:10.721 20:00:39 -- common/autotest_common.sh@10 -- # set +x 00:22:10.721 20:00:39 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:22:10.721 20:00:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:10.721 20:00:39 -- common/autotest_common.sh@10 -- # set +x 00:22:10.721 20:00:39 -- spdk/autotest.sh@391 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:10.721 20:00:39 -- spdk/autotest.sh@393 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:10.721 20:00:39 -- spdk/autotest.sh@393 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:10.721 20:00:39 -- spdk/autotest.sh@395 -- # hash lcov 00:22:10.721 20:00:39 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:22:10.721 
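With gcc confirmed and lcov available, the coverage post-processing that follows captures this run's counters, merges them with the pre-test baseline, and strips third-party and system paths from the report. A condensed sketch of that flow under the same output layout (the real invocations below carry additional --rc and -t options):

  repo=/home/vagrant/spdk_repo/spdk
  out=$repo/../output
  lcov -q -c -d "$repo" --no-external -o "$out/cov_test.info"                        # capture counters from this run
  lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"   # merge with the baseline
  lcov -q -r "$out/cov_total.info" '*/dpdk/*' '/usr/*' -o "$out/cov_total.info"      # drop third-party and system code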
20:00:39 -- spdk/autotest.sh@397 -- # hostname 00:22:10.721 20:00:39 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:10.979 geninfo: WARNING: invalid characters removed from testname! 00:22:43.081 20:01:06 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:43.081 20:01:10 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:44.981 20:01:13 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:47.513 20:01:16 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:50.043 20:01:18 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:53.330 20:01:21 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:55.862 20:01:24 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:55.862 20:01:24 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:55.862 20:01:24 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:22:55.862 20:01:24 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.862 20:01:24 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.862 20:01:24 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.862 20:01:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.862 20:01:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.862 20:01:24 -- paths/export.sh@5 -- $ export PATH 00:22:55.862 20:01:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.862 20:01:24 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:22:55.862 20:01:24 -- common/autobuild_common.sh@447 -- $ date +%s 00:22:55.862 20:01:24 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721851284.XXXXXX 00:22:55.862 20:01:24 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721851284.OPv52c 00:22:55.862 20:01:24 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:22:55.862 20:01:24 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:22:55.862 20:01:24 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:22:55.862 20:01:24 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:22:55.862 20:01:24 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:22:55.862 20:01:24 -- common/autobuild_common.sh@463 -- $ get_config_params 00:22:55.862 20:01:24 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:22:55.862 20:01:24 -- common/autotest_common.sh@10 -- $ set +x 00:22:55.862 20:01:24 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:22:55.862 20:01:24 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:22:55.862 20:01:24 -- pm/common@17 -- $ local monitor 00:22:55.862 20:01:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:55.862 20:01:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:55.862 
20:01:24 -- pm/common@25 -- $ sleep 1 00:22:55.862 20:01:24 -- pm/common@21 -- $ date +%s 00:22:55.862 20:01:24 -- pm/common@21 -- $ date +%s 00:22:55.862 20:01:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721851284 00:22:55.862 20:01:24 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721851284 00:22:55.862 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721851284_collect-cpu-load.pm.log 00:22:55.862 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721851284_collect-vmstat.pm.log 00:22:56.795 20:01:25 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:22:56.795 20:01:25 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:22:56.795 20:01:25 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:22:56.795 20:01:25 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:22:56.795 20:01:25 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:22:56.795 20:01:25 -- spdk/autopackage.sh@19 -- $ timing_finish 00:22:56.795 20:01:25 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:56.795 20:01:25 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:22:56.795 20:01:25 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:56.795 20:01:25 -- spdk/autopackage.sh@20 -- $ exit 0 00:22:56.795 20:01:25 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:22:56.795 20:01:25 -- pm/common@29 -- $ signal_monitor_resources TERM 00:22:56.795 20:01:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:22:56.795 20:01:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:56.795 20:01:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:22:56.795 20:01:25 -- pm/common@44 -- $ pid=86627 00:22:56.796 20:01:25 -- pm/common@50 -- $ kill -TERM 86627 00:22:56.796 20:01:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:56.796 20:01:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:22:56.796 20:01:25 -- pm/common@44 -- $ pid=86629 00:22:56.796 20:01:25 -- pm/common@50 -- $ kill -TERM 86629 00:22:56.796 + [[ -n 5094 ]] 00:22:56.796 + sudo kill 5094 00:22:56.805 [Pipeline] } 00:22:56.825 [Pipeline] // timeout 00:22:56.832 [Pipeline] } 00:22:56.851 [Pipeline] // stage 00:22:56.856 [Pipeline] } 00:22:56.873 [Pipeline] // catchError 00:22:56.883 [Pipeline] stage 00:22:56.885 [Pipeline] { (Stop VM) 00:22:56.900 [Pipeline] sh 00:22:57.179 + vagrant halt 00:23:01.368 ==> default: Halting domain... 00:23:06.643 [Pipeline] sh 00:23:06.919 + vagrant destroy -f 00:23:11.129 ==> default: Removing domain... 
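stop_monitor_resources, traced above just before the VM teardown, shuts down the collect-cpu-load and collect-vmstat monitors that autopackage started: for each collector it checks for a pid file under the power output directory and sends that pid SIGTERM. A minimal sketch of the pattern (the loop and the pid-file read are assumptions about how the pm/common helpers resolve the pids; the paths follow the trace):

  power=/home/vagrant/spdk_repo/spdk/../output/power
  for name in collect-cpu-load collect-vmstat; do
      pidfile=$power/$name.pid
      [[ -e $pidfile ]] || continue        # monitor never started, nothing to stop
      kill -TERM "$(cat "$pidfile")"       # ask the collector to flush its log and exit
  done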
00:23:11.141 [Pipeline] sh 00:23:11.421 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:23:11.439 [Pipeline] } 00:23:11.457 [Pipeline] // stage 00:23:11.463 [Pipeline] } 00:23:11.479 [Pipeline] // dir 00:23:11.484 [Pipeline] } 00:23:11.501 [Pipeline] // wrap 00:23:11.508 [Pipeline] } 00:23:11.521 [Pipeline] // catchError 00:23:11.531 [Pipeline] stage 00:23:11.533 [Pipeline] { (Epilogue) 00:23:11.546 [Pipeline] sh 00:23:11.825 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:18.437 [Pipeline] catchError 00:23:18.439 [Pipeline] { 00:23:18.455 [Pipeline] sh 00:23:18.755 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:18.755 Artifacts sizes are good 00:23:18.764 [Pipeline] } 00:23:18.780 [Pipeline] // catchError 00:23:18.792 [Pipeline] archiveArtifacts 00:23:18.799 Archiving artifacts 00:23:18.973 [Pipeline] cleanWs 00:23:18.986 [WS-CLEANUP] Deleting project workspace... 00:23:18.986 [WS-CLEANUP] Deferred wipeout is used... 00:23:18.992 [WS-CLEANUP] done 00:23:18.994 [Pipeline] } 00:23:19.015 [Pipeline] // stage 00:23:19.023 [Pipeline] } 00:23:19.040 [Pipeline] // node 00:23:19.046 [Pipeline] End of Pipeline 00:23:19.082 Finished: SUCCESS
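The epilogue above compresses the collected output and runs check_artifacts_size.sh, which reported that the artifact sizes are good before they were archived. The kind of guard such a script typically implements is a ceiling on the size of the archived directory; a sketch with an illustrative limit (the job's real threshold and per-file rules are not shown in this log):

  limit_mb=2048                          # illustrative ceiling, not the job's actual value
  used_mb=$(du -sm output | cut -f1)     # total size of the artifacts about to be archived
  if (( used_mb > limit_mb )); then
      echo "Artifacts size ${used_mb}MB exceeds ${limit_mb}MB"
      exit 1
  fi
  echo 'Artifacts sizes are good'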